Test Report: KVM_Linux_crio 22047

                    
4655c6aa5049635fb4cb98fc0f74f66a1c57dbdb:2025-12-06:42658

Tests failed (14/431)

TestAddons/parallel/Ingress (156.33s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-774690 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-774690 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-774690 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [ea010f3e-0b70-4331-8ef2-e8dbeb8da0dd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [ea010f3e-0b70-4331-8ef2-e8dbeb8da0dd] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.004860254s
I1206 09:15:24.805011  396534 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-774690 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-774690 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m13.069173475s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
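Exit status 28 here is curl's "operation timed out" code, so the ssh step itself worked and the failure is most likely nginx never answering through the ingress before curl gave up. A minimal manual re-check of the same path (a hedged sketch; these commands reuse the profile and context names from this run but were not executed by the test) could look like:

	# repeat the request verbosely with an explicit bound on the wait
	out/minikube-linux-amd64 -p addons-774690 ssh "curl -v --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
	# check that the ingress controller and the nginx backend are still Ready
	kubectl --context addons-774690 -n ingress-nginx get pods -o wide
	kubectl --context addons-774690 get ingress,pods -o wide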
addons_test.go:288: (dbg) Run:  kubectl --context addons-774690 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-774690 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.249
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-774690 -n addons-774690
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-774690 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-774690 logs -n 25: (1.142814232s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-548578                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-548578 │ jenkins │ v1.37.0 │ 06 Dec 25 09:12 UTC │ 06 Dec 25 09:12 UTC │
	│ start   │ --download-only -p binary-mirror-961783 --alsologtostderr --binary-mirror http://127.0.0.1:35409 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-961783 │ jenkins │ v1.37.0 │ 06 Dec 25 09:12 UTC │                     │
	│ delete  │ -p binary-mirror-961783                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-961783 │ jenkins │ v1.37.0 │ 06 Dec 25 09:12 UTC │ 06 Dec 25 09:12 UTC │
	│ addons  │ disable dashboard -p addons-774690                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-774690        │ jenkins │ v1.37.0 │ 06 Dec 25 09:12 UTC │                     │
	│ addons  │ enable dashboard -p addons-774690                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-774690        │ jenkins │ v1.37.0 │ 06 Dec 25 09:12 UTC │                     │
	│ start   │ -p addons-774690 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-774690        │ jenkins │ v1.37.0 │ 06 Dec 25 09:12 UTC │ 06 Dec 25 09:14 UTC │
	│ addons  │ addons-774690 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-774690        │ jenkins │ v1.37.0 │ 06 Dec 25 09:14 UTC │ 06 Dec 25 09:14 UTC │
	│ addons  │ addons-774690 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-774690        │ jenkins │ v1.37.0 │ 06 Dec 25 09:14 UTC │ 06 Dec 25 09:14 UTC │
	│ addons  │ enable headlamp -p addons-774690 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-774690        │ jenkins │ v1.37.0 │ 06 Dec 25 09:14 UTC │ 06 Dec 25 09:14 UTC │
	│ addons  │ addons-774690 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-774690        │ jenkins │ v1.37.0 │ 06 Dec 25 09:14 UTC │ 06 Dec 25 09:14 UTC │
	│ addons  │ addons-774690 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-774690        │ jenkins │ v1.37.0 │ 06 Dec 25 09:15 UTC │ 06 Dec 25 09:15 UTC │
	│ ssh     │ addons-774690 ssh cat /opt/local-path-provisioner/pvc-6faf3b95-bd02-4761-afb7-95d974158c7c_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                      │ addons-774690        │ jenkins │ v1.37.0 │ 06 Dec 25 09:15 UTC │ 06 Dec 25 09:15 UTC │
	│ addons  │ addons-774690 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-774690        │ jenkins │ v1.37.0 │ 06 Dec 25 09:15 UTC │ 06 Dec 25 09:15 UTC │
	│ addons  │ addons-774690 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-774690        │ jenkins │ v1.37.0 │ 06 Dec 25 09:15 UTC │ 06 Dec 25 09:15 UTC │
	│ ip      │ addons-774690 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-774690        │ jenkins │ v1.37.0 │ 06 Dec 25 09:15 UTC │ 06 Dec 25 09:15 UTC │
	│ addons  │ addons-774690 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-774690        │ jenkins │ v1.37.0 │ 06 Dec 25 09:15 UTC │ 06 Dec 25 09:15 UTC │
	│ addons  │ addons-774690 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-774690        │ jenkins │ v1.37.0 │ 06 Dec 25 09:15 UTC │ 06 Dec 25 09:15 UTC │
	│ addons  │ addons-774690 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-774690        │ jenkins │ v1.37.0 │ 06 Dec 25 09:15 UTC │ 06 Dec 25 09:15 UTC │
	│ addons  │ addons-774690 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-774690        │ jenkins │ v1.37.0 │ 06 Dec 25 09:15 UTC │ 06 Dec 25 09:15 UTC │
	│ ssh     │ addons-774690 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-774690        │ jenkins │ v1.37.0 │ 06 Dec 25 09:15 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-774690                                                                                                                                                                                                                                                                                                                                                                                         │ addons-774690        │ jenkins │ v1.37.0 │ 06 Dec 25 09:15 UTC │ 06 Dec 25 09:15 UTC │
	│ addons  │ addons-774690 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-774690        │ jenkins │ v1.37.0 │ 06 Dec 25 09:15 UTC │ 06 Dec 25 09:15 UTC │
	│ addons  │ addons-774690 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-774690        │ jenkins │ v1.37.0 │ 06 Dec 25 09:15 UTC │ 06 Dec 25 09:15 UTC │
	│ addons  │ addons-774690 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-774690        │ jenkins │ v1.37.0 │ 06 Dec 25 09:15 UTC │ 06 Dec 25 09:16 UTC │
	│ ip      │ addons-774690 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-774690        │ jenkins │ v1.37.0 │ 06 Dec 25 09:17 UTC │ 06 Dec 25 09:17 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:12:21
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:12:21.264725  397455 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:12:21.265041  397455 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:12:21.265053  397455 out.go:374] Setting ErrFile to fd 2...
	I1206 09:12:21.265059  397455 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:12:21.265288  397455 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-392561/.minikube/bin
	I1206 09:12:21.265896  397455 out.go:368] Setting JSON to false
	I1206 09:12:21.266842  397455 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3281,"bootTime":1765009060,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:12:21.266908  397455 start.go:143] virtualization: kvm guest
	I1206 09:12:21.269023  397455 out.go:179] * [addons-774690] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:12:21.270564  397455 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:12:21.270608  397455 notify.go:221] Checking for updates...
	I1206 09:12:21.272959  397455 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:12:21.274303  397455 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-392561/kubeconfig
	I1206 09:12:21.275586  397455 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-392561/.minikube
	I1206 09:12:21.277028  397455 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:12:21.278359  397455 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:12:21.279684  397455 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:12:21.310872  397455 out.go:179] * Using the kvm2 driver based on user configuration
	I1206 09:12:21.312242  397455 start.go:309] selected driver: kvm2
	I1206 09:12:21.312259  397455 start.go:927] validating driver "kvm2" against <nil>
	I1206 09:12:21.312274  397455 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:12:21.313315  397455 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 09:12:21.313622  397455 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:12:21.313656  397455 cni.go:84] Creating CNI manager for ""
	I1206 09:12:21.313700  397455 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 09:12:21.313723  397455 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1206 09:12:21.313784  397455 start.go:353] cluster config:
	{Name:addons-774690 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-774690 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:12:21.313931  397455 iso.go:125] acquiring lock: {Name:mkf36bf2c9901302dc74c7ac02d02007e6a978f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:12:21.315576  397455 out.go:179] * Starting "addons-774690" primary control-plane node in "addons-774690" cluster
	I1206 09:12:21.316897  397455 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:12:21.316929  397455 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-392561/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1206 09:12:21.316951  397455 cache.go:65] Caching tarball of preloaded images
	I1206 09:12:21.317038  397455 preload.go:238] Found /home/jenkins/minikube-integration/22047-392561/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 09:12:21.317049  397455 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1206 09:12:21.317363  397455 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/config.json ...
	I1206 09:12:21.317385  397455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/config.json: {Name:mk4ced784f71219404f915ebf50e084aa875dc8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:12:21.317539  397455 start.go:360] acquireMachinesLock for addons-774690: {Name:mk0e8456872a81874c47f1b4b5997728e70c766d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1206 09:12:21.317585  397455 start.go:364] duration metric: took 31.534µs to acquireMachinesLock for "addons-774690"
	I1206 09:12:21.317602  397455 start.go:93] Provisioning new machine with config: &{Name:addons-774690 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-774690 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:12:21.317657  397455 start.go:125] createHost starting for "" (driver="kvm2")
	I1206 09:12:21.319345  397455 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1206 09:12:21.319530  397455 start.go:159] libmachine.API.Create for "addons-774690" (driver="kvm2")
	I1206 09:12:21.319566  397455 client.go:173] LocalClient.Create starting
	I1206 09:12:21.319684  397455 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca.pem
	I1206 09:12:21.386408  397455 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/cert.pem
	I1206 09:12:21.505973  397455 main.go:143] libmachine: creating domain...
	I1206 09:12:21.505995  397455 main.go:143] libmachine: creating network...
	I1206 09:12:21.507603  397455 main.go:143] libmachine: found existing default network
	I1206 09:12:21.507893  397455 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1206 09:12:21.508529  397455 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d26980}
	I1206 09:12:21.508651  397455 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-774690</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1206 09:12:21.515041  397455 main.go:143] libmachine: creating private network mk-addons-774690 192.168.39.0/24...
	I1206 09:12:21.584983  397455 main.go:143] libmachine: private network mk-addons-774690 192.168.39.0/24 created
	I1206 09:12:21.585281  397455 main.go:143] libmachine: <network>
	  <name>mk-addons-774690</name>
	  <uuid>0f5e1b32-f92f-4225-b6b5-e6d16a15f14d</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:8f:87:80'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1206 09:12:21.585328  397455 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690 ...
	I1206 09:12:21.585351  397455 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22047-392561/.minikube/cache/iso/amd64/minikube-v1.37.0-1764843329-22032-amd64.iso
	I1206 09:12:21.585361  397455 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22047-392561/.minikube
	I1206 09:12:21.585432  397455 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22047-392561/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22047-392561/.minikube/cache/iso/amd64/minikube-v1.37.0-1764843329-22032-amd64.iso...
	I1206 09:12:21.864874  397455 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690/id_rsa...
	I1206 09:12:21.887234  397455 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690/addons-774690.rawdisk...
	I1206 09:12:21.887282  397455 main.go:143] libmachine: Writing magic tar header
	I1206 09:12:21.887324  397455 main.go:143] libmachine: Writing SSH key tar header
	I1206 09:12:21.887408  397455 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690 ...
	I1206 09:12:21.887470  397455 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690
	I1206 09:12:21.887497  397455 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690 (perms=drwx------)
	I1206 09:12:21.887516  397455 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22047-392561/.minikube/machines
	I1206 09:12:21.887529  397455 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22047-392561/.minikube/machines (perms=drwxr-xr-x)
	I1206 09:12:21.887541  397455 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22047-392561/.minikube
	I1206 09:12:21.887549  397455 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22047-392561/.minikube (perms=drwxr-xr-x)
	I1206 09:12:21.887559  397455 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22047-392561
	I1206 09:12:21.887567  397455 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22047-392561 (perms=drwxrwxr-x)
	I1206 09:12:21.887577  397455 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1206 09:12:21.887584  397455 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1206 09:12:21.887595  397455 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1206 09:12:21.887602  397455 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1206 09:12:21.887612  397455 main.go:143] libmachine: checking permissions on dir: /home
	I1206 09:12:21.887618  397455 main.go:143] libmachine: skipping /home - not owner
	I1206 09:12:21.887624  397455 main.go:143] libmachine: defining domain...
	I1206 09:12:21.888933  397455 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-774690</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690/addons-774690.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-774690'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1206 09:12:21.897421  397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:d6:5e:ab in network default
	I1206 09:12:21.898138  397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:21.898161  397455 main.go:143] libmachine: starting domain...
	I1206 09:12:21.898168  397455 main.go:143] libmachine: ensuring networks are active...
	I1206 09:12:21.899205  397455 main.go:143] libmachine: Ensuring network default is active
	I1206 09:12:21.899683  397455 main.go:143] libmachine: Ensuring network mk-addons-774690 is active
	I1206 09:12:21.900484  397455 main.go:143] libmachine: getting domain XML...
	I1206 09:12:21.901800  397455 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-774690</name>
	  <uuid>6637641e-4385-4e2f-bcf4-adf9edc82956</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690/addons-774690.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:02:15:5c'/>
	      <source network='mk-addons-774690'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:d6:5e:ab'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1206 09:12:23.218876  397455 main.go:143] libmachine: waiting for domain to start...
	I1206 09:12:23.220380  397455 main.go:143] libmachine: domain is now running
	I1206 09:12:23.220398  397455 main.go:143] libmachine: waiting for IP...
	I1206 09:12:23.221161  397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:23.221664  397455 main.go:143] libmachine: no network interface addresses found for domain addons-774690 (source=lease)
	I1206 09:12:23.221694  397455 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:12:23.221947  397455 main.go:143] libmachine: unable to find current IP address of domain addons-774690 in network mk-addons-774690 (interfaces detected: [])
	I1206 09:12:23.221995  397455 retry.go:31] will retry after 295.292313ms: waiting for domain to come up
	I1206 09:12:23.518620  397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:23.519222  397455 main.go:143] libmachine: no network interface addresses found for domain addons-774690 (source=lease)
	I1206 09:12:23.519240  397455 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:12:23.519595  397455 main.go:143] libmachine: unable to find current IP address of domain addons-774690 in network mk-addons-774690 (interfaces detected: [])
	I1206 09:12:23.519635  397455 retry.go:31] will retry after 377.089345ms: waiting for domain to come up
	I1206 09:12:23.898090  397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:23.898641  397455 main.go:143] libmachine: no network interface addresses found for domain addons-774690 (source=lease)
	I1206 09:12:23.898659  397455 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:12:23.898945  397455 main.go:143] libmachine: unable to find current IP address of domain addons-774690 in network mk-addons-774690 (interfaces detected: [])
	I1206 09:12:23.899004  397455 retry.go:31] will retry after 397.605073ms: waiting for domain to come up
	I1206 09:12:24.299024  397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:24.299615  397455 main.go:143] libmachine: no network interface addresses found for domain addons-774690 (source=lease)
	I1206 09:12:24.299637  397455 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:12:24.299976  397455 main.go:143] libmachine: unable to find current IP address of domain addons-774690 in network mk-addons-774690 (interfaces detected: [])
	I1206 09:12:24.300018  397455 retry.go:31] will retry after 489.121787ms: waiting for domain to come up
	I1206 09:12:24.790564  397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:24.791070  397455 main.go:143] libmachine: no network interface addresses found for domain addons-774690 (source=lease)
	I1206 09:12:24.791086  397455 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:12:24.791356  397455 main.go:143] libmachine: unable to find current IP address of domain addons-774690 in network mk-addons-774690 (interfaces detected: [])
	I1206 09:12:24.791400  397455 retry.go:31] will retry after 547.775883ms: waiting for domain to come up
	I1206 09:12:25.341430  397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:25.342187  397455 main.go:143] libmachine: no network interface addresses found for domain addons-774690 (source=lease)
	I1206 09:12:25.342205  397455 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:12:25.342553  397455 main.go:143] libmachine: unable to find current IP address of domain addons-774690 in network mk-addons-774690 (interfaces detected: [])
	I1206 09:12:25.342606  397455 retry.go:31] will retry after 575.42966ms: waiting for domain to come up
	I1206 09:12:25.919580  397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:25.920138  397455 main.go:143] libmachine: no network interface addresses found for domain addons-774690 (source=lease)
	I1206 09:12:25.920157  397455 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:12:25.920537  397455 main.go:143] libmachine: unable to find current IP address of domain addons-774690 in network mk-addons-774690 (interfaces detected: [])
	I1206 09:12:25.920595  397455 retry.go:31] will retry after 942.250925ms: waiting for domain to come up
	I1206 09:12:26.864846  397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:26.865422  397455 main.go:143] libmachine: no network interface addresses found for domain addons-774690 (source=lease)
	I1206 09:12:26.865438  397455 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:12:26.865763  397455 main.go:143] libmachine: unable to find current IP address of domain addons-774690 in network mk-addons-774690 (interfaces detected: [])
	I1206 09:12:26.865801  397455 retry.go:31] will retry after 1.477195332s: waiting for domain to come up
	I1206 09:12:28.345783  397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:28.346336  397455 main.go:143] libmachine: no network interface addresses found for domain addons-774690 (source=lease)
	I1206 09:12:28.346356  397455 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:12:28.346643  397455 main.go:143] libmachine: unable to find current IP address of domain addons-774690 in network mk-addons-774690 (interfaces detected: [])
	I1206 09:12:28.346693  397455 retry.go:31] will retry after 1.655335883s: waiting for domain to come up
	I1206 09:12:30.004609  397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:30.005128  397455 main.go:143] libmachine: no network interface addresses found for domain addons-774690 (source=lease)
	I1206 09:12:30.005142  397455 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:12:30.005422  397455 main.go:143] libmachine: unable to find current IP address of domain addons-774690 in network mk-addons-774690 (interfaces detected: [])
	I1206 09:12:30.005462  397455 retry.go:31] will retry after 1.662112692s: waiting for domain to come up
	I1206 09:12:31.670161  397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:31.670814  397455 main.go:143] libmachine: no network interface addresses found for domain addons-774690 (source=lease)
	I1206 09:12:31.670832  397455 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:12:31.671153  397455 main.go:143] libmachine: unable to find current IP address of domain addons-774690 in network mk-addons-774690 (interfaces detected: [])
	I1206 09:12:31.671199  397455 retry.go:31] will retry after 2.355274201s: waiting for domain to come up
	I1206 09:12:34.029809  397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:34.030267  397455 main.go:143] libmachine: no network interface addresses found for domain addons-774690 (source=lease)
	I1206 09:12:34.030279  397455 main.go:143] libmachine: trying to list again with source=arp
	I1206 09:12:34.030531  397455 main.go:143] libmachine: unable to find current IP address of domain addons-774690 in network mk-addons-774690 (interfaces detected: [])
	I1206 09:12:34.030566  397455 retry.go:31] will retry after 2.915121356s: waiting for domain to come up
	I1206 09:12:36.946965  397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:36.947469  397455 main.go:143] libmachine: domain addons-774690 has current primary IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:36.947482  397455 main.go:143] libmachine: found domain IP: 192.168.39.249
	I1206 09:12:36.947490  397455 main.go:143] libmachine: reserving static IP address...
	I1206 09:12:36.947817  397455 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-774690", mac: "52:54:00:02:15:5c", ip: "192.168.39.249"} in network mk-addons-774690
	I1206 09:12:37.143042  397455 main.go:143] libmachine: reserved static IP address 192.168.39.249 for domain addons-774690
	I1206 09:12:37.143071  397455 main.go:143] libmachine: waiting for SSH...
	I1206 09:12:37.143079  397455 main.go:143] libmachine: Getting to WaitForSSH function...
	I1206 09:12:37.145931  397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:37.146471  397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:minikube Clientid:01:52:54:00:02:15:5c}
	I1206 09:12:37.146514  397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:37.146803  397455 main.go:143] libmachine: Using SSH client type: native
	I1206 09:12:37.147068  397455 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I1206 09:12:37.147081  397455 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1206 09:12:37.266580  397455 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 09:12:37.266986  397455 main.go:143] libmachine: domain creation complete
	I1206 09:12:37.268496  397455 machine.go:94] provisionDockerMachine start ...
	I1206 09:12:37.270840  397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:37.271142  397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
	I1206 09:12:37.271165  397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:37.271314  397455 main.go:143] libmachine: Using SSH client type: native
	I1206 09:12:37.271534  397455 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I1206 09:12:37.271547  397455 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 09:12:37.385585  397455 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1206 09:12:37.385627  397455 buildroot.go:166] provisioning hostname "addons-774690"
	I1206 09:12:37.388866  397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:37.389328  397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
	I1206 09:12:37.389352  397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:37.389736  397455 main.go:143] libmachine: Using SSH client type: native
	I1206 09:12:37.389963  397455 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I1206 09:12:37.389977  397455 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-774690 && echo "addons-774690" | sudo tee /etc/hostname
	I1206 09:12:37.522119  397455 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-774690
	
	I1206 09:12:37.525221  397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:37.525593  397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
	I1206 09:12:37.525622  397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:37.525971  397455 main.go:143] libmachine: Using SSH client type: native
	I1206 09:12:37.526200  397455 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I1206 09:12:37.526223  397455 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-774690' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-774690/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-774690' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 09:12:37.654727  397455 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 09:12:37.654771  397455 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22047-392561/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-392561/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-392561/.minikube}
	I1206 09:12:37.654795  397455 buildroot.go:174] setting up certificates
	I1206 09:12:37.654816  397455 provision.go:84] configureAuth start
	I1206 09:12:37.657627  397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:37.658129  397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
	I1206 09:12:37.658152  397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:37.660461  397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:37.660865  397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
	I1206 09:12:37.660927  397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:37.661069  397455 provision.go:143] copyHostCerts
	I1206 09:12:37.661131  397455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-392561/.minikube/cert.pem (1123 bytes)
	I1206 09:12:37.661240  397455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-392561/.minikube/key.pem (1679 bytes)
	I1206 09:12:37.661296  397455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-392561/.minikube/ca.pem (1082 bytes)
	I1206 09:12:37.661342  397455 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-392561/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca-key.pem org=jenkins.addons-774690 san=[127.0.0.1 192.168.39.249 addons-774690 localhost minikube]
	I1206 09:12:37.716816  397455 provision.go:177] copyRemoteCerts
	I1206 09:12:37.716878  397455 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 09:12:37.719884  397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:37.720344  397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
	I1206 09:12:37.720372  397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:37.720600  397455 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690/id_rsa Username:docker}
	I1206 09:12:37.811076  397455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 09:12:37.852903  397455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1206 09:12:37.881451  397455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 09:12:37.909622  397455 provision.go:87] duration metric: took 254.786505ms to configureAuth
	I1206 09:12:37.909655  397455 buildroot.go:189] setting minikube options for container-runtime
	I1206 09:12:37.909870  397455 config.go:182] Loaded profile config "addons-774690": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:12:37.913098  397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:37.913433  397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
	I1206 09:12:37.913450  397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:37.913601  397455 main.go:143] libmachine: Using SSH client type: native
	I1206 09:12:37.913856  397455 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I1206 09:12:37.913875  397455 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 09:12:38.157342  397455 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 09:12:38.157399  397455 machine.go:97] duration metric: took 888.860096ms to provisionDockerMachine
	I1206 09:12:38.157414  397455 client.go:176] duration metric: took 16.83784032s to LocalClient.Create
	I1206 09:12:38.157441  397455 start.go:167] duration metric: took 16.837921755s to libmachine.API.Create "addons-774690"
	I1206 09:12:38.157455  397455 start.go:293] postStartSetup for "addons-774690" (driver="kvm2")
	I1206 09:12:38.157466  397455 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 09:12:38.157549  397455 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 09:12:38.160853  397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:38.161278  397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
	I1206 09:12:38.161309  397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:38.161525  397455 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690/id_rsa Username:docker}
	I1206 09:12:38.250161  397455 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 09:12:38.254967  397455 info.go:137] Remote host: Buildroot 2025.02
	I1206 09:12:38.255000  397455 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-392561/.minikube/addons for local assets ...
	I1206 09:12:38.255067  397455 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-392561/.minikube/files for local assets ...
	I1206 09:12:38.255090  397455 start.go:296] duration metric: took 97.627973ms for postStartSetup
	I1206 09:12:38.258373  397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:38.258978  397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
	I1206 09:12:38.259015  397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:38.259296  397455 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/config.json ...
	I1206 09:12:38.259548  397455 start.go:128] duration metric: took 16.941878107s to createHost
	I1206 09:12:38.261660  397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:38.261985  397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
	I1206 09:12:38.262004  397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:38.262151  397455 main.go:143] libmachine: Using SSH client type: native
	I1206 09:12:38.262348  397455 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I1206 09:12:38.262357  397455 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1206 09:12:38.376010  397455 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765012358.336236686
	
	I1206 09:12:38.376034  397455 fix.go:216] guest clock: 1765012358.336236686
	I1206 09:12:38.376042  397455 fix.go:229] Guest: 2025-12-06 09:12:38.336236686 +0000 UTC Remote: 2025-12-06 09:12:38.259562404 +0000 UTC m=+17.047061298 (delta=76.674282ms)
	I1206 09:12:38.376058  397455 fix.go:200] guest clock delta is within tolerance: 76.674282ms
	I1206 09:12:38.376064  397455 start.go:83] releasing machines lock for "addons-774690", held for 17.058470853s
	I1206 09:12:38.379190  397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:38.379761  397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
	I1206 09:12:38.379789  397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:38.380398  397455 ssh_runner.go:195] Run: cat /version.json
	I1206 09:12:38.380556  397455 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 09:12:38.383688  397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:38.383940  397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:38.384183  397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
	I1206 09:12:38.384214  397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:38.384395  397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
	I1206 09:12:38.384410  397455 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690/id_rsa Username:docker}
	I1206 09:12:38.384426  397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:38.384664  397455 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690/id_rsa Username:docker}
	I1206 09:12:38.467260  397455 ssh_runner.go:195] Run: systemctl --version
	I1206 09:12:38.503268  397455 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 09:12:38.666096  397455 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 09:12:38.673387  397455 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 09:12:38.673458  397455 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 09:12:38.693372  397455 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 09:12:38.693402  397455 start.go:496] detecting cgroup driver to use...
	I1206 09:12:38.693465  397455 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 09:12:38.714463  397455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 09:12:38.731825  397455 docker.go:218] disabling cri-docker service (if available) ...
	I1206 09:12:38.731907  397455 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 09:12:38.749950  397455 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 09:12:38.766932  397455 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 09:12:38.913090  397455 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 09:12:39.116258  397455 docker.go:234] disabling docker service ...
	I1206 09:12:39.116351  397455 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 09:12:39.132879  397455 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 09:12:39.148524  397455 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 09:12:39.302698  397455 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 09:12:39.445663  397455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 09:12:39.461177  397455 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:12:39.482688  397455 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 09:12:39.482790  397455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:12:39.494228  397455 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 09:12:39.494290  397455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:12:39.506627  397455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:12:39.518604  397455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:12:39.531385  397455 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 09:12:39.544254  397455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:12:39.556809  397455 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:12:39.576866  397455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:12:39.589032  397455 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 09:12:39.599434  397455 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1206 09:12:39.599507  397455 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1206 09:12:39.619804  397455 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 09:12:39.631689  397455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:12:39.768694  397455 ssh_runner.go:195] Run: sudo systemctl restart crio
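	The sed sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place (pin the pause image, switch the cgroup manager to cgroupfs, allow unprivileged low ports) before CRI-O is restarted. A hedged sketch of the two central rewrites using Go's regexp package on an in-memory copy; the sample file contents are an assumption for illustration only:
	
	package main
	
	import (
		"fmt"
		"regexp"
	)
	
	func main() {
		// Illustrative stand-in for /etc/crio/crio.conf.d/02-crio.conf.
		conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
	
		// Pin the pause image (first sed in the log).
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
		// Switch the cgroup manager to cgroupfs (second sed in the log).
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	
		fmt.Print(conf)
	}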
	I1206 09:12:39.882663  397455 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 09:12:39.882794  397455 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 09:12:39.888045  397455 start.go:564] Will wait 60s for crictl version
	I1206 09:12:39.888125  397455 ssh_runner.go:195] Run: which crictl
	I1206 09:12:39.891876  397455 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1206 09:12:39.926288  397455 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1206 09:12:39.926416  397455 ssh_runner.go:195] Run: crio --version
	I1206 09:12:39.957157  397455 ssh_runner.go:195] Run: crio --version
	I1206 09:12:39.990070  397455 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1206 09:12:39.994329  397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:39.994843  397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
	I1206 09:12:39.994883  397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:39.995205  397455 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1206 09:12:40.000071  397455 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:12:40.015283  397455 kubeadm.go:884] updating cluster {Name:addons-774690 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-774690 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:12:40.015406  397455 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:12:40.015449  397455 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:12:40.044958  397455 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1206 09:12:40.045030  397455 ssh_runner.go:195] Run: which lz4
	I1206 09:12:40.049475  397455 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1206 09:12:40.054121  397455 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1206 09:12:40.054153  397455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
	I1206 09:12:41.262302  397455 crio.go:462] duration metric: took 1.212862096s to copy over tarball
	I1206 09:12:41.262407  397455 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1206 09:12:42.660878  397455 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.398437025s)
	I1206 09:12:42.660909  397455 crio.go:469] duration metric: took 1.398565722s to extract the tarball
	I1206 09:12:42.660917  397455 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1206 09:12:42.697982  397455 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:12:42.738954  397455 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:12:42.738983  397455 cache_images.go:86] Images are preloaded, skipping loading
	I1206 09:12:42.738993  397455 kubeadm.go:935] updating node { 192.168.39.249 8443 v1.34.2 crio true true} ...
	I1206 09:12:42.739090  397455 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-774690 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.249
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-774690 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 09:12:42.739165  397455 ssh_runner.go:195] Run: crio config
	I1206 09:12:42.786928  397455 cni.go:84] Creating CNI manager for ""
	I1206 09:12:42.786956  397455 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 09:12:42.786981  397455 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 09:12:42.787012  397455 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.249 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-774690 NodeName:addons-774690 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.249"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.249 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:12:42.787195  397455 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.249
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-774690"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.249"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.249"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
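	The block above is the complete kubeadm configuration written to /var/tmp/minikube/kubeadm.yaml.new: an InitConfiguration, a ClusterConfiguration, a KubeletConfiguration and a KubeProxyConfiguration separated by "---". A hypothetical sanity check in Go (not part of minikube) that splits the documents and reads back two of the values the rest of this log depends on; the YAML library used here is an assumption:
	
	package main
	
	import (
		"fmt"
		"os"
		"strings"
	
		"gopkg.in/yaml.v3" // assumed dependency; any YAML decoder works
	)
	
	func main() {
		raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml") // path from the log
		if err != nil {
			panic(err)
		}
		// The file is a multi-document YAML stream; inspect each document.
		for _, doc := range strings.Split(string(raw), "\n---\n") {
			var m map[string]interface{}
			if err := yaml.Unmarshal([]byte(doc), &m); err != nil {
				continue
			}
			switch m["kind"] {
			case "KubeProxyConfiguration":
				fmt.Println("clusterCIDR:", m["clusterCIDR"]) // expect "10.244.0.0/16"
			case "KubeletConfiguration":
				fmt.Println("cgroupDriver:", m["cgroupDriver"]) // expect "cgroupfs"
			}
		}
	}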
	I1206 09:12:42.787280  397455 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 09:12:42.799481  397455 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:12:42.799561  397455 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:12:42.811281  397455 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1206 09:12:42.832371  397455 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 09:12:42.852252  397455 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1206 09:12:42.872927  397455 ssh_runner.go:195] Run: grep 192.168.39.249	control-plane.minikube.internal$ /etc/hosts
	I1206 09:12:42.876971  397455 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.249	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:12:42.891013  397455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:12:43.027643  397455 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:12:43.056787  397455 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690 for IP: 192.168.39.249
	I1206 09:12:43.056818  397455 certs.go:195] generating shared ca certs ...
	I1206 09:12:43.056837  397455 certs.go:227] acquiring lock for ca certs: {Name:mk3de97d1b446a24abef5e763ff5edd1f090afa6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:12:43.057053  397455 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-392561/.minikube/ca.key
	I1206 09:12:43.191560  397455 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-392561/.minikube/ca.crt ...
	I1206 09:12:43.191592  397455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-392561/.minikube/ca.crt: {Name:mk73781a6e0b099870c6ec5e2b3d5f6976131c61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:12:43.191778  397455 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-392561/.minikube/ca.key ...
	I1206 09:12:43.191791  397455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-392561/.minikube/ca.key: {Name:mka4a65b4a64d945c4fff99c29e6abe899a87854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:12:43.191867  397455 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-392561/.minikube/proxy-client-ca.key
	I1206 09:12:43.242567  397455 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-392561/.minikube/proxy-client-ca.crt ...
	I1206 09:12:43.242599  397455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-392561/.minikube/proxy-client-ca.crt: {Name:mk5b017c0690420f6e772284318d221ff6ca606a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:12:43.242776  397455 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-392561/.minikube/proxy-client-ca.key ...
	I1206 09:12:43.242789  397455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-392561/.minikube/proxy-client-ca.key: {Name:mk26268c039405e81f93848b8003ab79c2f94036 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:12:43.242858  397455 certs.go:257] generating profile certs ...
	I1206 09:12:43.242951  397455 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/client.key
	I1206 09:12:43.242971  397455 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/client.crt with IP's: []
	I1206 09:12:43.285307  397455 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/client.crt ...
	I1206 09:12:43.285341  397455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/client.crt: {Name:mkc19579499f5e8323c4e87c54d6b9bb0d613130 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:12:43.285515  397455 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/client.key ...
	I1206 09:12:43.285527  397455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/client.key: {Name:mkeb31e4f2ec712c5cc198771ea9e70f2163d4ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:12:43.285599  397455 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/apiserver.key.a10d728d
	I1206 09:12:43.285618  397455 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/apiserver.crt.a10d728d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.249]
	I1206 09:12:43.337889  397455 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/apiserver.crt.a10d728d ...
	I1206 09:12:43.337922  397455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/apiserver.crt.a10d728d: {Name:mkba3a702aa3f9201be378f2005263b993e3ba17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:12:43.338094  397455 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/apiserver.key.a10d728d ...
	I1206 09:12:43.338117  397455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/apiserver.key.a10d728d: {Name:mk043b1af64033ecc95a6f119f4ed39271950939 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:12:43.338197  397455 certs.go:382] copying /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/apiserver.crt.a10d728d -> /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/apiserver.crt
	I1206 09:12:43.338271  397455 certs.go:386] copying /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/apiserver.key.a10d728d -> /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/apiserver.key
	I1206 09:12:43.338322  397455 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/proxy-client.key
	I1206 09:12:43.338341  397455 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/proxy-client.crt with IP's: []
	I1206 09:12:43.459333  397455 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/proxy-client.crt ...
	I1206 09:12:43.459365  397455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/proxy-client.crt: {Name:mk1a1a650f3f90b232c109baf3b368c83926b35e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:12:43.459582  397455 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/proxy-client.key ...
	I1206 09:12:43.459602  397455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/proxy-client.key: {Name:mk34cfa33c32d5a45507e866f9c305c10bec11fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:12:43.459834  397455 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 09:12:43.459880  397455 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca.pem (1082 bytes)
	I1206 09:12:43.459906  397455 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:12:43.459930  397455 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/key.pem (1679 bytes)
	I1206 09:12:43.460524  397455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:12:43.489859  397455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 09:12:43.517940  397455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:12:43.546521  397455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1206 09:12:43.575530  397455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1206 09:12:43.606164  397455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 09:12:43.637528  397455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:12:43.668279  397455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 09:12:43.698861  397455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:12:43.733919  397455 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:12:43.761683  397455 ssh_runner.go:195] Run: openssl version
	I1206 09:12:43.768401  397455 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:12:43.785336  397455 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:12:43.797484  397455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:12:43.803117  397455 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 09:12 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:12:43.803185  397455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:12:43.810527  397455 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:12:43.822046  397455 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
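	The two openssl steps above take the subject hash of minikubeCA.pem and symlink it into /etc/ssl/certs under that hash name (b5213941.0) so standard TLS tooling on the node trusts the cluster CA. A small, illustrative Go check that parses the same PEM with crypto/x509 and prints its subject and validity (the subject-hash computation itself is left to openssl here):
	
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)
	
	func main() {
		pemBytes, err := os.ReadFile("/usr/share/ca-certificates/minikubeCA.pem") // path from the log
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		fmt.Println("subject:  ", cert.Subject)
		fmt.Println("is CA:    ", cert.IsCA)
		fmt.Println("not after:", cert.NotAfter)
	}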
	I1206 09:12:43.833465  397455 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:12:43.838133  397455 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 09:12:43.838187  397455 kubeadm.go:401] StartCluster: {Name:addons-774690 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-774690 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:12:43.838254  397455 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:12:43.838316  397455 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:12:43.870115  397455 cri.go:89] found id: ""
	I1206 09:12:43.870209  397455 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 09:12:43.882255  397455 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 09:12:43.893865  397455 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 09:12:43.905404  397455 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 09:12:43.905427  397455 kubeadm.go:158] found existing configuration files:
	
	I1206 09:12:43.905474  397455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 09:12:43.915893  397455 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 09:12:43.915959  397455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 09:12:43.927341  397455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 09:12:43.937764  397455 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 09:12:43.937842  397455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 09:12:43.948955  397455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 09:12:43.959817  397455 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 09:12:43.959898  397455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 09:12:43.970828  397455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 09:12:43.981392  397455 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 09:12:43.981461  397455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 09:12:43.992794  397455 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1206 09:12:44.138818  397455 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 09:12:56.006151  397455 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1206 09:12:56.006220  397455 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 09:12:56.006314  397455 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 09:12:56.006428  397455 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 09:12:56.006538  397455 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 09:12:56.006635  397455 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 09:12:56.008342  397455 out.go:252]   - Generating certificates and keys ...
	I1206 09:12:56.008444  397455 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:12:56.008527  397455 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:12:56.008630  397455 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 09:12:56.008734  397455 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 09:12:56.008849  397455 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 09:12:56.008952  397455 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 09:12:56.009038  397455 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 09:12:56.009202  397455 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-774690 localhost] and IPs [192.168.39.249 127.0.0.1 ::1]
	I1206 09:12:56.009270  397455 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 09:12:56.009427  397455 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-774690 localhost] and IPs [192.168.39.249 127.0.0.1 ::1]
	I1206 09:12:56.009516  397455 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 09:12:56.009624  397455 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 09:12:56.009696  397455 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 09:12:56.009801  397455 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 09:12:56.009887  397455 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:12:56.009963  397455 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 09:12:56.010037  397455 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:12:56.010132  397455 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:12:56.010213  397455 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:12:56.010314  397455 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:12:56.010412  397455 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 09:12:56.012213  397455 out.go:252]   - Booting up control plane ...
	I1206 09:12:56.012323  397455 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:12:56.012418  397455 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:12:56.012529  397455 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:12:56.012680  397455 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:12:56.012828  397455 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 09:12:56.012948  397455 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 09:12:56.013024  397455 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:12:56.013057  397455 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:12:56.013223  397455 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 09:12:56.013308  397455 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:12:56.013354  397455 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.0020806s
	I1206 09:12:56.013425  397455 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1206 09:12:56.013500  397455 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.249:8443/livez
	I1206 09:12:56.013578  397455 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1206 09:12:56.013640  397455 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1206 09:12:56.013723  397455 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.190831779s
	I1206 09:12:56.013780  397455 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.148908871s
	I1206 09:12:56.013839  397455 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.502336925s
	I1206 09:12:56.013922  397455 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 09:12:56.014019  397455 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 09:12:56.014065  397455 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 09:12:56.014208  397455 kubeadm.go:319] [mark-control-plane] Marking the node addons-774690 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 09:12:56.014256  397455 kubeadm.go:319] [bootstrap-token] Using token: hq1x23.tb70g8aq8wzcy4j9
	I1206 09:12:56.015684  397455 out.go:252]   - Configuring RBAC rules ...
	I1206 09:12:56.015781  397455 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 09:12:56.015877  397455 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 09:12:56.016044  397455 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 09:12:56.016199  397455 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 09:12:56.016322  397455 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 09:12:56.016443  397455 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 09:12:56.016572  397455 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 09:12:56.016633  397455 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1206 09:12:56.016700  397455 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1206 09:12:56.016721  397455 kubeadm.go:319] 
	I1206 09:12:56.016808  397455 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1206 09:12:56.016821  397455 kubeadm.go:319] 
	I1206 09:12:56.016917  397455 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1206 09:12:56.016925  397455 kubeadm.go:319] 
	I1206 09:12:56.016958  397455 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1206 09:12:56.017042  397455 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 09:12:56.017112  397455 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 09:12:56.017121  397455 kubeadm.go:319] 
	I1206 09:12:56.017193  397455 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1206 09:12:56.017201  397455 kubeadm.go:319] 
	I1206 09:12:56.017266  397455 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 09:12:56.017275  397455 kubeadm.go:319] 
	I1206 09:12:56.017344  397455 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1206 09:12:56.017445  397455 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 09:12:56.017536  397455 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 09:12:56.017545  397455 kubeadm.go:319] 
	I1206 09:12:56.017623  397455 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 09:12:56.017694  397455 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1206 09:12:56.017700  397455 kubeadm.go:319] 
	I1206 09:12:56.017788  397455 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token hq1x23.tb70g8aq8wzcy4j9 \
	I1206 09:12:56.017924  397455 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:94494b00f450bcad667cd30e10b7d2bac57a4f821af5dc44bcd0f6ad77a7145a \
	I1206 09:12:56.017964  397455 kubeadm.go:319] 	--control-plane 
	I1206 09:12:56.017974  397455 kubeadm.go:319] 
	I1206 09:12:56.018085  397455 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1206 09:12:56.018099  397455 kubeadm.go:319] 
	I1206 09:12:56.018208  397455 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token hq1x23.tb70g8aq8wzcy4j9 \
	I1206 09:12:56.018349  397455 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:94494b00f450bcad667cd30e10b7d2bac57a4f821af5dc44bcd0f6ad77a7145a 
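	The --discovery-token-ca-cert-hash printed above is the SHA-256 of the cluster CA's DER-encoded Subject Public Key Info, which joining nodes use to pin the control plane's CA. A short illustrative derivation in Go (the certificate path is taken from the log):
	
	package main
	
	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)
	
	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // cluster CA path from the log
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA certificate.
		fmt.Printf("sha256:%x\n", sha256.Sum256(cert.RawSubjectPublicKeyInfo))
	}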
	I1206 09:12:56.018361  397455 cni.go:84] Creating CNI manager for ""
	I1206 09:12:56.018370  397455 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 09:12:56.020054  397455 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1206 09:12:56.021408  397455 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1206 09:12:56.044361  397455 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
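	The 496-byte file copied above is the bridge CNI conflist that hands pods addresses out of the 10.244.0.0/16 pod CIDR configured earlier. The real contents of /etc/cni/net.d/1-k8s.conflist are not shown in this log, so the conflist below is purely an assumed example; the Go snippet only decodes it and lists the plugin chain:
	
	package main
	
	import (
		"encoding/json"
		"fmt"
	)
	
	// Assumed example only; not the actual 1-k8s.conflist from the VM.
	const conflist = `{
	  "cniVersion": "0.4.0",
	  "name": "bridge",
	  "plugins": [
	    {"type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}},
	    {"type": "portmap", "capabilities": {"portMappings": true}}
	  ]
	}`
	
	func main() {
		var cfg struct {
			Name    string `json:"name"`
			Plugins []struct {
				Type string `json:"type"`
			} `json:"plugins"`
		}
		if err := json.Unmarshal([]byte(conflist), &cfg); err != nil {
			panic(err)
		}
		for _, p := range cfg.Plugins {
			fmt.Println(cfg.Name, "->", p.Type) // bridge -> bridge, then bridge -> portmap
		}
	}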
	I1206 09:12:56.069426  397455 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 09:12:56.069543  397455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:12:56.069543  397455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-774690 minikube.k8s.io/updated_at=2025_12_06T09_12_56_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4 minikube.k8s.io/name=addons-774690 minikube.k8s.io/primary=true
	I1206 09:12:56.259391  397455 ops.go:34] apiserver oom_adj: -16
	I1206 09:12:56.259407  397455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:12:56.760231  397455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:12:57.259836  397455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:12:57.759671  397455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:12:58.260437  397455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:12:58.760533  397455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:12:59.260225  397455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:12:59.760067  397455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:12:59.883777  397455 kubeadm.go:1114] duration metric: took 3.814314606s to wait for elevateKubeSystemPrivileges
	I1206 09:12:59.883840  397455 kubeadm.go:403] duration metric: took 16.045655645s to StartCluster
	I1206 09:12:59.883872  397455 settings.go:142] acquiring lock: {Name:mk6aea9c06de6b4df1ec2e5d18bffa62e8a405af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:12:59.884053  397455 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22047-392561/kubeconfig
	I1206 09:12:59.884746  397455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-392561/kubeconfig: {Name:mkde56684c6f903767a9ec1254dd48fbeb8e8b50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:12:59.884976  397455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 09:12:59.884993  397455 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 09:12:59.885079  397455 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1206 09:12:59.885194  397455 addons.go:70] Setting yakd=true in profile "addons-774690"
	I1206 09:12:59.885223  397455 addons.go:70] Setting inspektor-gadget=true in profile "addons-774690"
	I1206 09:12:59.885237  397455 addons.go:70] Setting metrics-server=true in profile "addons-774690"
	I1206 09:12:59.885249  397455 addons.go:239] Setting addon inspektor-gadget=true in "addons-774690"
	I1206 09:12:59.885258  397455 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-774690"
	I1206 09:12:59.885276  397455 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-774690"
	I1206 09:12:59.885273  397455 addons.go:70] Setting default-storageclass=true in profile "addons-774690"
	I1206 09:12:59.885303  397455 host.go:66] Checking if "addons-774690" exists ...
	I1206 09:12:59.885315  397455 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-774690"
	I1206 09:12:59.885319  397455 addons.go:70] Setting registry-creds=true in profile "addons-774690"
	I1206 09:12:59.885329  397455 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-774690"
	I1206 09:12:59.885339  397455 addons.go:239] Setting addon registry-creds=true in "addons-774690"
	I1206 09:12:59.885335  397455 config.go:182] Loaded profile config "addons-774690": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:12:59.885360  397455 addons.go:70] Setting ingress-dns=true in profile "addons-774690"
	I1206 09:12:59.885365  397455 host.go:66] Checking if "addons-774690" exists ...
	I1206 09:12:59.885371  397455 addons.go:239] Setting addon ingress-dns=true in "addons-774690"
	I1206 09:12:59.885383  397455 addons.go:70] Setting volcano=true in profile "addons-774690"
	I1206 09:12:59.885347  397455 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-774690"
	I1206 09:12:59.885404  397455 addons.go:70] Setting volumesnapshots=true in profile "addons-774690"
	I1206 09:12:59.885415  397455 addons.go:239] Setting addon volumesnapshots=true in "addons-774690"
	I1206 09:12:59.885422  397455 host.go:66] Checking if "addons-774690" exists ...
	I1206 09:12:59.885429  397455 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-774690"
	I1206 09:12:59.885436  397455 host.go:66] Checking if "addons-774690" exists ...
	I1206 09:12:59.885449  397455 host.go:66] Checking if "addons-774690" exists ...
	I1206 09:12:59.885309  397455 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-774690"
	I1206 09:12:59.885228  397455 addons.go:239] Setting addon yakd=true in "addons-774690"
	I1206 09:12:59.885931  397455 host.go:66] Checking if "addons-774690" exists ...
	I1206 09:12:59.885329  397455 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-774690"
	I1206 09:12:59.886214  397455 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-774690"
	I1206 09:12:59.886273  397455 host.go:66] Checking if "addons-774690" exists ...
	I1206 09:12:59.886378  397455 addons.go:70] Setting gcp-auth=true in profile "addons-774690"
	I1206 09:12:59.886400  397455 mustload.go:66] Loading cluster: addons-774690
	I1206 09:12:59.886600  397455 config.go:182] Loaded profile config "addons-774690": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:12:59.885295  397455 addons.go:70] Setting storage-provisioner=true in profile "addons-774690"
	I1206 09:12:59.886679  397455 addons.go:239] Setting addon storage-provisioner=true in "addons-774690"
	I1206 09:12:59.886735  397455 host.go:66] Checking if "addons-774690" exists ...
	I1206 09:12:59.885372  397455 host.go:66] Checking if "addons-774690" exists ...
	I1206 09:12:59.885296  397455 addons.go:70] Setting registry=true in profile "addons-774690"
	I1206 09:12:59.887075  397455 addons.go:239] Setting addon registry=true in "addons-774690"
	I1206 09:12:59.885339  397455 addons.go:70] Setting cloud-spanner=true in profile "addons-774690"
	I1206 09:12:59.887129  397455 host.go:66] Checking if "addons-774690" exists ...
	I1206 09:12:59.887143  397455 addons.go:239] Setting addon cloud-spanner=true in "addons-774690"
	I1206 09:12:59.887170  397455 host.go:66] Checking if "addons-774690" exists ...
	I1206 09:12:59.885250  397455 addons.go:239] Setting addon metrics-server=true in "addons-774690"
	I1206 09:12:59.887644  397455 host.go:66] Checking if "addons-774690" exists ...
	I1206 09:12:59.887689  397455 out.go:179] * Verifying Kubernetes components...
	I1206 09:12:59.885394  397455 addons.go:239] Setting addon volcano=true in "addons-774690"
	I1206 09:12:59.887702  397455 addons.go:70] Setting ingress=true in profile "addons-774690"
	I1206 09:12:59.887739  397455 addons.go:239] Setting addon ingress=true in "addons-774690"
	I1206 09:12:59.887753  397455 host.go:66] Checking if "addons-774690" exists ...
	I1206 09:12:59.887771  397455 host.go:66] Checking if "addons-774690" exists ...
	I1206 09:12:59.889532  397455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:12:59.893518  397455 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1206 09:12:59.893598  397455 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1206 09:12:59.893526  397455 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1206 09:12:59.893648  397455 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1206 09:12:59.893681  397455 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-774690"
	I1206 09:12:59.894903  397455 host.go:66] Checking if "addons-774690" exists ...
	I1206 09:12:59.894965  397455 addons.go:239] Setting addon default-storageclass=true in "addons-774690"
	I1206 09:12:59.895006  397455 host.go:66] Checking if "addons-774690" exists ...
	I1206 09:12:59.895530  397455 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1206 09:12:59.895557  397455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1206 09:12:59.895612  397455 host.go:66] Checking if "addons-774690" exists ...
	I1206 09:12:59.895539  397455 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1206 09:12:59.895632  397455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1206 09:12:59.895542  397455 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1206 09:12:59.895821  397455 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1206 09:12:59.896278  397455 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1206 09:12:59.897205  397455 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1206 09:12:59.897217  397455 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1206 09:12:59.897218  397455 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1206 09:12:59.897209  397455 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1206 09:12:59.897208  397455 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1206 09:12:59.897676  397455 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1206 09:12:59.898084  397455 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1206 09:12:59.898119  397455 out.go:179]   - Using image docker.io/registry:3.0.0
	I1206 09:12:59.898124  397455 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1206 09:12:59.898147  397455 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1206 09:12:59.899393  397455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1206 09:12:59.899057  397455 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1206 09:12:59.899800  397455 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1206 09:12:59.899956  397455 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 09:12:59.899980  397455 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 09:12:59.899997  397455 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1206 09:12:59.899112  397455 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1206 09:12:59.900580  397455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1206 09:12:59.900069  397455 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1206 09:12:59.900699  397455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1206 09:12:59.900070  397455 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:12:59.900782  397455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 09:12:59.900133  397455 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1206 09:12:59.900817  397455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1206 09:12:59.901219  397455 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1206 09:12:59.902003  397455 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1206 09:12:59.902009  397455 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1206 09:12:59.902027  397455 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1206 09:12:59.902882  397455 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1206 09:12:59.902893  397455 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1206 09:12:59.903642  397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:59.904543  397455 out.go:179]   - Using image docker.io/busybox:stable
	I1206 09:12:59.904662  397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:59.904692  397455 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1206 09:12:59.905016  397455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1206 09:12:59.905333  397455 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1206 09:12:59.905760  397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
	I1206 09:12:59.906094  397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:59.906335  397455 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1206 09:12:59.906357  397455 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1206 09:12:59.906824  397455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1206 09:12:59.906950  397455 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690/id_rsa Username:docker}
	I1206 09:12:59.906693  397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
	I1206 09:12:59.907199  397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:59.907887  397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:59.907957  397455 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690/id_rsa Username:docker}
	I1206 09:12:59.909424  397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
	I1206 09:12:59.909461  397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:59.909754  397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:59.910029  397455 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690/id_rsa Username:docker}
	I1206 09:12:59.910554  397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:59.911332  397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
	I1206 09:12:59.911344  397455 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1206 09:12:59.911438  397455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1206 09:12:59.911620  397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:59.911411  397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:59.911967  397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:59.912103  397455 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1206 09:12:59.912160  397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:59.912358  397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
	I1206 09:12:59.912393  397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:59.912517  397455 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690/id_rsa Username:docker}
	I1206 09:12:59.912672  397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:59.913186  397455 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690/id_rsa Username:docker}
	I1206 09:12:59.913245  397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
	I1206 09:12:59.913272  397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:59.913533  397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
	I1206 09:12:59.913570  397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:59.913724  397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:59.913895  397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
	I1206 09:12:59.913933  397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:59.914045  397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:59.914131  397455 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690/id_rsa Username:docker}
	I1206 09:12:59.914402  397455 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690/id_rsa Username:docker}
	I1206 09:12:59.914429  397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
	I1206 09:12:59.914459  397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:59.914694  397455 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1206 09:12:59.914769  397455 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690/id_rsa Username:docker}
	I1206 09:12:59.914982  397455 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690/id_rsa Username:docker}
	I1206 09:12:59.915311  397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
	I1206 09:12:59.915382  397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
	I1206 09:12:59.915407  397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:59.915412  397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:59.915695  397455 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690/id_rsa Username:docker}
	I1206 09:12:59.915702  397455 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690/id_rsa Username:docker}
	I1206 09:12:59.915913  397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:59.916494  397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
	I1206 09:12:59.916527  397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:59.916674  397455 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690/id_rsa Username:docker}
	I1206 09:12:59.916813  397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:59.917218  397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
	I1206 09:12:59.917252  397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:59.917359  397455 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1206 09:12:59.917451  397455 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690/id_rsa Username:docker}
	I1206 09:12:59.917896  397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:59.918336  397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
	I1206 09:12:59.918365  397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:59.918505  397455 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690/id_rsa Username:docker}
	I1206 09:12:59.920298  397455 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1206 09:12:59.921739  397455 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1206 09:12:59.921759  397455 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1206 09:12:59.924322  397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:59.924779  397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
	I1206 09:12:59.924805  397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:12:59.924965  397455 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690/id_rsa Username:docker}
	W1206 09:13:00.122937  397455 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:60920->192.168.39.249:22: read: connection reset by peer
	I1206 09:13:00.122978  397455 retry.go:31] will retry after 175.899567ms: ssh: handshake failed: read tcp 192.168.39.1:60920->192.168.39.249:22: read: connection reset by peer
	W1206 09:13:00.123042  397455 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:60930->192.168.39.249:22: read: connection reset by peer
	I1206 09:13:00.123047  397455 retry.go:31] will retry after 182.601016ms: ssh: handshake failed: read tcp 192.168.39.1:60930->192.168.39.249:22: read: connection reset by peer
	I1206 09:13:00.273021  397455 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:13:00.273105  397455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 09:13:00.360334  397455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1206 09:13:00.404407  397455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:13:00.459531  397455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1206 09:13:00.460504  397455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 09:13:00.485891  397455 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1206 09:13:00.485934  397455 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1206 09:13:00.547022  397455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1206 09:13:00.556989  397455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1206 09:13:00.563771  397455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1206 09:13:00.575670  397455 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1206 09:13:00.575691  397455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1206 09:13:00.579135  397455 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1206 09:13:00.579151  397455 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1206 09:13:00.583643  397455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1206 09:13:00.632301  397455 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1206 09:13:00.632331  397455 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1206 09:13:00.763630  397455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1206 09:13:00.842518  397455 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1206 09:13:00.842558  397455 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1206 09:13:00.857985  397455 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1206 09:13:00.858016  397455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1206 09:13:00.939020  397455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1206 09:13:00.973678  397455 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1206 09:13:00.973730  397455 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1206 09:13:01.018413  397455 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1206 09:13:01.018490  397455 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1206 09:13:01.150159  397455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1206 09:13:01.163543  397455 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1206 09:13:01.163619  397455 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1206 09:13:01.492871  397455 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 09:13:01.492906  397455 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1206 09:13:01.697315  397455 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1206 09:13:01.697348  397455 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1206 09:13:01.855900  397455 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1206 09:13:01.855928  397455 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1206 09:13:01.891651  397455 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1206 09:13:01.891684  397455 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1206 09:13:02.102737  397455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 09:13:02.509928  397455 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1206 09:13:02.509959  397455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1206 09:13:02.596588  397455 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1206 09:13:02.596637  397455 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1206 09:13:02.601077  397455 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1206 09:13:02.601113  397455 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1206 09:13:03.113486  397455 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1206 09:13:03.113526  397455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1206 09:13:03.143545  397455 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1206 09:13:03.143583  397455 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1206 09:13:03.143747  397455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1206 09:13:03.409242  397455 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1206 09:13:03.409274  397455 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1206 09:13:03.458076  397455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1206 09:13:04.015392  397455 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1206 09:13:04.015418  397455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1206 09:13:04.025004  397455 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.7518581s)
	I1206 09:13:04.025060  397455 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1206 09:13:04.025104  397455 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.752046746s)
	I1206 09:13:04.025166  397455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (3.664794059s)
	I1206 09:13:04.025995  397455 node_ready.go:35] waiting up to 6m0s for node "addons-774690" to be "Ready" ...
	I1206 09:13:04.035138  397455 node_ready.go:49] node "addons-774690" is "Ready"
	I1206 09:13:04.035170  397455 node_ready.go:38] duration metric: took 9.143113ms for node "addons-774690" to be "Ready" ...
	I1206 09:13:04.035185  397455 api_server.go:52] waiting for apiserver process to appear ...
	I1206 09:13:04.035233  397455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:13:04.281009  397455 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1206 09:13:04.281044  397455 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1206 09:13:04.544358  397455 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-774690" context rescaled to 1 replicas
	I1206 09:13:04.618647  397455 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1206 09:13:04.618672  397455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1206 09:13:04.711020  397455 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1206 09:13:04.711041  397455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1206 09:13:04.833043  397455 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1206 09:13:04.833072  397455 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1206 09:13:05.253178  397455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1206 09:13:06.383133  397455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.978682334s)
	I1206 09:13:07.418825  397455 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1206 09:13:07.421732  397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:13:07.422152  397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
	I1206 09:13:07.422191  397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:13:07.422343  397455 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690/id_rsa Username:docker}
	I1206 09:13:07.698688  397455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.239105307s)
	I1206 09:13:07.698776  397455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.23824231s)
	I1206 09:13:07.698821  397455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.151758924s)
	I1206 09:13:07.698927  397455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.141901203s)
	I1206 09:13:07.698973  397455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (7.135167948s)
	W1206 09:13:07.827335  397455 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1206 09:13:07.830544  397455 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1206 09:13:07.952723  397455 addons.go:239] Setting addon gcp-auth=true in "addons-774690"
	I1206 09:13:07.952808  397455 host.go:66] Checking if "addons-774690" exists ...
	I1206 09:13:07.954704  397455 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1206 09:13:07.957383  397455 main.go:143] libmachine: domain addons-774690 has defined MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:13:07.957831  397455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:02:15:5c", ip: ""} in network mk-addons-774690: {Iface:virbr1 ExpiryTime:2025-12-06 10:12:36 +0000 UTC Type:0 Mac:52:54:00:02:15:5c Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-774690 Clientid:01:52:54:00:02:15:5c}
	I1206 09:13:07.957855  397455 main.go:143] libmachine: domain addons-774690 has defined IP address 192.168.39.249 and MAC address 52:54:00:02:15:5c in network mk-addons-774690
	I1206 09:13:07.958039  397455 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/addons-774690/id_rsa Username:docker}
	I1206 09:13:08.053226  397455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (7.469550845s)
	I1206 09:13:08.053330  397455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.289659335s)
	I1206 09:13:09.923981  397455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.984907982s)
	I1206 09:13:09.924043  397455 addons.go:495] Verifying addon ingress=true in "addons-774690"
	I1206 09:13:09.924066  397455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.773856843s)
	I1206 09:13:09.924096  397455 addons.go:495] Verifying addon registry=true in "addons-774690"
	I1206 09:13:09.924127  397455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.821359574s)
	I1206 09:13:09.924220  397455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.780442362s)
	I1206 09:13:09.924225  397455 addons.go:495] Verifying addon metrics-server=true in "addons-774690"
	I1206 09:13:09.924368  397455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.466253199s)
	W1206 09:13:09.924405  397455 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1206 09:13:09.924437  397455 retry.go:31] will retry after 283.145717ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1206 09:13:09.924449  397455 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.889196408s)
	I1206 09:13:09.924484  397455 api_server.go:72] duration metric: took 10.039466985s to wait for apiserver process to appear ...
	I1206 09:13:09.924496  397455 api_server.go:88] waiting for apiserver healthz status ...
	I1206 09:13:09.924520  397455 api_server.go:253] Checking apiserver healthz at https://192.168.39.249:8443/healthz ...
	I1206 09:13:09.925953  397455 out.go:179] * Verifying ingress addon...
	I1206 09:13:09.926968  397455 out.go:179] * Verifying registry addon...
	I1206 09:13:09.926967  397455 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-774690 service yakd-dashboard -n yakd-dashboard
	
	I1206 09:13:09.929115  397455 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1206 09:13:09.930609  397455 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1206 09:13:09.965796  397455 api_server.go:279] https://192.168.39.249:8443/healthz returned 200:
	ok
	I1206 09:13:09.966231  397455 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1206 09:13:09.966245  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:09.966535  397455 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1206 09:13:09.966556  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:09.970498  397455 api_server.go:141] control plane version: v1.34.2
	I1206 09:13:09.970534  397455 api_server.go:131] duration metric: took 46.031492ms to wait for apiserver health ...
	I1206 09:13:09.970544  397455 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 09:13:09.985945  397455 system_pods.go:59] 17 kube-system pods found
	I1206 09:13:09.985990  397455 system_pods.go:61] "amd-gpu-device-plugin-svq5h" [ff554a2a-f7e8-4581-b0cc-821075d441f9] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1206 09:13:09.986002  397455 system_pods.go:61] "coredns-66bc5c9577-l9grt" [3c33d79c-6db7-4610-b394-d2b81216197d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:13:09.986019  397455 system_pods.go:61] "coredns-66bc5c9577-sgm5h" [0e85b90c-8f6b-4208-8699-b3dc97355093] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:13:09.986025  397455 system_pods.go:61] "etcd-addons-774690" [034cb1f2-61eb-401d-8bd4-dc4065130f57] Running
	I1206 09:13:09.986031  397455 system_pods.go:61] "kube-apiserver-addons-774690" [4b7b72ad-0e63-49b0-bcd7-2027061e77e7] Running
	I1206 09:13:09.986036  397455 system_pods.go:61] "kube-controller-manager-addons-774690" [045b5ffd-5313-43cc-8751-0d3927a9dd20] Running
	I1206 09:13:09.986044  397455 system_pods.go:61] "kube-ingress-dns-minikube" [4117e868-9c8a-440e-9af2-45709b4fbdc3] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1206 09:13:09.986049  397455 system_pods.go:61] "kube-proxy-jzp4f" [df1c8ffd-d67f-46c3-aec5-6a7b099bce49] Running
	I1206 09:13:09.986055  397455 system_pods.go:61] "kube-scheduler-addons-774690" [105e520d-94c8-47b5-958a-679d16b36726] Running
	I1206 09:13:09.986063  397455 system_pods.go:61] "metrics-server-85b7d694d7-clrcl" [34e1f363-ac29-415d-89c3-bfe4ac513e1f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 09:13:09.986074  397455 system_pods.go:61] "nvidia-device-plugin-daemonset-vdltq" [6bd89c20-b241-4230-9f16-b5904f3e8fd6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1206 09:13:09.986080  397455 system_pods.go:61] "registry-6b586f9694-4gkjr" [0b1de7e3-a280-4a46-a545-e46a47e746b0] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1206 09:13:09.986092  397455 system_pods.go:61] "registry-creds-764b6fb674-m55kh" [40957898-1473-4039-aeb6-a7ece80be295] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 09:13:09.986101  397455 system_pods.go:61] "registry-proxy-t6flj" [50457566-2e31-43a8-9fba-b01c71f057b8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1206 09:13:09.986110  397455 system_pods.go:61] "snapshot-controller-7d9fbc56b8-nld6r" [c9b5706d-2f99-4a13-aa27-d1cd48aa900b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 09:13:09.986118  397455 system_pods.go:61] "snapshot-controller-7d9fbc56b8-nmfz4" [51917eb2-3eac-4a48-9c5d-7f87daa63579] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 09:13:09.986127  397455 system_pods.go:61] "storage-provisioner" [d85c1bd3-4a0c-4397-9c7d-4cb74f18e187] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:13:09.986135  397455 system_pods.go:74] duration metric: took 15.584121ms to wait for pod list to return data ...
	I1206 09:13:09.986149  397455 default_sa.go:34] waiting for default service account to be created ...
	I1206 09:13:10.003134  397455 default_sa.go:45] found service account: "default"
	I1206 09:13:10.003161  397455 default_sa.go:55] duration metric: took 17.006599ms for default service account to be created ...
	I1206 09:13:10.003171  397455 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 09:13:10.081935  397455 system_pods.go:86] 17 kube-system pods found
	I1206 09:13:10.081972  397455 system_pods.go:89] "amd-gpu-device-plugin-svq5h" [ff554a2a-f7e8-4581-b0cc-821075d441f9] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1206 09:13:10.081980  397455 system_pods.go:89] "coredns-66bc5c9577-l9grt" [3c33d79c-6db7-4610-b394-d2b81216197d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:13:10.081989  397455 system_pods.go:89] "coredns-66bc5c9577-sgm5h" [0e85b90c-8f6b-4208-8699-b3dc97355093] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:13:10.081993  397455 system_pods.go:89] "etcd-addons-774690" [034cb1f2-61eb-401d-8bd4-dc4065130f57] Running
	I1206 09:13:10.081999  397455 system_pods.go:89] "kube-apiserver-addons-774690" [4b7b72ad-0e63-49b0-bcd7-2027061e77e7] Running
	I1206 09:13:10.082002  397455 system_pods.go:89] "kube-controller-manager-addons-774690" [045b5ffd-5313-43cc-8751-0d3927a9dd20] Running
	I1206 09:13:10.082008  397455 system_pods.go:89] "kube-ingress-dns-minikube" [4117e868-9c8a-440e-9af2-45709b4fbdc3] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1206 09:13:10.082011  397455 system_pods.go:89] "kube-proxy-jzp4f" [df1c8ffd-d67f-46c3-aec5-6a7b099bce49] Running
	I1206 09:13:10.082015  397455 system_pods.go:89] "kube-scheduler-addons-774690" [105e520d-94c8-47b5-958a-679d16b36726] Running
	I1206 09:13:10.082020  397455 system_pods.go:89] "metrics-server-85b7d694d7-clrcl" [34e1f363-ac29-415d-89c3-bfe4ac513e1f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 09:13:10.082025  397455 system_pods.go:89] "nvidia-device-plugin-daemonset-vdltq" [6bd89c20-b241-4230-9f16-b5904f3e8fd6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1206 09:13:10.082034  397455 system_pods.go:89] "registry-6b586f9694-4gkjr" [0b1de7e3-a280-4a46-a545-e46a47e746b0] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1206 09:13:10.082042  397455 system_pods.go:89] "registry-creds-764b6fb674-m55kh" [40957898-1473-4039-aeb6-a7ece80be295] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 09:13:10.082046  397455 system_pods.go:89] "registry-proxy-t6flj" [50457566-2e31-43a8-9fba-b01c71f057b8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1206 09:13:10.082052  397455 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nld6r" [c9b5706d-2f99-4a13-aa27-d1cd48aa900b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 09:13:10.082060  397455 system_pods.go:89] "snapshot-controller-7d9fbc56b8-nmfz4" [51917eb2-3eac-4a48-9c5d-7f87daa63579] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 09:13:10.082065  397455 system_pods.go:89] "storage-provisioner" [d85c1bd3-4a0c-4397-9c7d-4cb74f18e187] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 09:13:10.082074  397455 system_pods.go:126] duration metric: took 78.896787ms to wait for k8s-apps to be running ...
	I1206 09:13:10.082082  397455 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 09:13:10.082134  397455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:13:10.208028  397455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1206 09:13:10.453451  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:10.464841  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:10.833442  397455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.580203929s)
	I1206 09:13:10.833487  397455 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-774690"
	I1206 09:13:10.833520  397455 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.878764926s)
	I1206 09:13:10.833583  397455 system_svc.go:56] duration metric: took 751.493295ms WaitForService to wait for kubelet
	I1206 09:13:10.833657  397455 kubeadm.go:587] duration metric: took 10.948634384s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:13:10.833682  397455 node_conditions.go:102] verifying NodePressure condition ...
	I1206 09:13:10.835084  397455 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1206 09:13:10.835089  397455 out.go:179] * Verifying csi-hostpath-driver addon...
	I1206 09:13:10.836527  397455 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1206 09:13:10.837103  397455 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1206 09:13:10.838023  397455 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1206 09:13:10.838041  397455 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1206 09:13:10.884334  397455 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1206 09:13:10.884372  397455 node_conditions.go:123] node cpu capacity is 2
	I1206 09:13:10.884394  397455 node_conditions.go:105] duration metric: took 50.706247ms to run NodePressure ...
	I1206 09:13:10.884412  397455 start.go:242] waiting for startup goroutines ...
	I1206 09:13:10.884791  397455 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1206 09:13:10.884814  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:10.935048  397455 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1206 09:13:10.935075  397455 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1206 09:13:10.959927  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:10.960804  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:11.019995  397455 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1206 09:13:11.020021  397455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1206 09:13:11.128849  397455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1206 09:13:11.353635  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:11.454088  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:11.454913  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:11.844527  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:11.937467  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:11.939243  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:12.183572  397455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.975483981s)
	I1206 09:13:12.362367  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:12.463525  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:12.465243  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:12.644464  397455 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.51554942s)
	I1206 09:13:12.645693  397455 addons.go:495] Verifying addon gcp-auth=true in "addons-774690"
	I1206 09:13:12.647462  397455 out.go:179] * Verifying gcp-auth addon...
	I1206 09:13:12.649999  397455 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1206 09:13:12.664141  397455 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1206 09:13:12.664164  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:12.842946  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:12.944351  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:12.944642  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:13.154573  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:13.342399  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:13.433875  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:13.437678  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:13.657220  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:13.841914  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:13.936545  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:13.937140  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:14.155091  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:14.342234  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:14.436445  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:14.440494  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:14.654894  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:14.844992  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:14.941072  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:14.944001  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:15.157755  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:15.341254  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:15.434388  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:15.435385  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:15.656059  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:15.844470  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:15.934566  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:15.935986  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:16.155344  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:16.341146  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:16.435318  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:16.437469  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:16.653920  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:16.843354  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:16.944302  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:16.944535  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:17.155368  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:17.341266  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:17.433510  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:17.434095  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:17.653622  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:17.841178  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:17.933380  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:17.934632  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:18.155823  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:18.341668  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:18.432873  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:18.434911  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:18.654384  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:18.850890  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:18.935055  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:18.937443  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:19.155333  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:19.343012  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:19.433738  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:19.436739  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:19.656734  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:19.842522  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:19.933071  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:19.937980  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:20.153505  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:20.342172  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:20.435439  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:20.435450  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:20.655757  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:20.842669  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:20.934149  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:20.937035  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:21.156829  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:21.343321  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:21.434421  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:21.435556  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:21.653389  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:21.841010  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:21.943329  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:21.945941  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:22.154252  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:22.342212  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:22.434233  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:22.436020  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:22.654204  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:22.841083  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:22.934282  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:22.934316  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:23.154928  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:23.341922  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:23.433299  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:23.434357  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:23.653585  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:23.841042  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:23.933804  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:23.936535  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:24.157689  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:24.341759  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:24.433537  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:24.434235  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:24.654804  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:24.844232  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:24.935641  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:24.937079  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:25.156488  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:25.343562  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:25.432651  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:25.436777  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:25.657236  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:25.840491  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:25.932300  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:25.935655  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:26.157124  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:26.341449  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:26.433830  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:26.439218  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:26.655212  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:26.841409  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:26.937972  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:26.938002  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:27.154065  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:27.340746  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:27.433761  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:27.434572  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:27.654305  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:27.842533  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:27.932737  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:27.935629  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:28.159414  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:28.340654  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:28.435049  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:28.436446  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:28.657604  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:28.843426  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:28.932925  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:28.937016  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:29.153938  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:29.342945  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:29.436165  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:29.436972  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:29.653276  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:29.841058  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:29.934940  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:29.936591  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:30.156434  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:30.350184  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:30.436325  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:30.436509  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:30.655073  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:30.841239  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:30.932912  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:30.936911  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:31.328104  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:31.512626  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:31.512896  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:31.514843  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:31.655408  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:31.842650  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:31.936560  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:31.938777  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:32.155781  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:32.342139  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:32.433866  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:32.436661  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:32.654426  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:32.841583  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:32.934281  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:32.935303  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:33.154816  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:33.342374  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:33.441312  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:33.441679  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:33.654282  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:33.840530  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:33.932253  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:33.934146  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:34.153572  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:34.341201  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:34.435867  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:34.436100  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:34.653878  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:34.844037  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:35.088587  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:35.089962  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:35.166941  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:35.343119  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:35.436644  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:35.437379  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:35.658258  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:35.841405  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:35.932975  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:35.935339  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:36.157666  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:36.342304  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:36.435836  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:36.435873  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:36.655493  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:36.841903  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:36.942797  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:36.943206  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:37.154007  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:37.341988  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:37.433819  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:37.435500  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:37.655102  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:37.840755  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:37.933857  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:37.934809  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:38.156357  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:38.341230  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:38.433536  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:38.435509  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:38.654085  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:38.840230  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:38.934394  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:38.934898  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:39.154481  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:39.345695  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:39.438043  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:39.439582  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:39.655939  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:39.843837  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:39.934407  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:39.934799  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:40.155882  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:40.342439  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:40.434552  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:40.436513  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:13:40.656856  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:40.843014  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:40.942120  397455 kapi.go:107] duration metric: took 31.011508655s to wait for kubernetes.io/minikube-addons=registry ...
	I1206 09:13:40.942173  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:41.153358  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:41.341474  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:41.435387  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:41.654296  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:41.842232  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:41.933430  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:42.155127  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:42.342231  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:42.433875  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:42.654448  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:42.841923  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:42.934580  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:43.157894  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:43.342479  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:43.442287  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:43.653841  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:43.841968  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:43.933453  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:44.155939  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:44.341881  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:44.434232  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:44.653984  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:44.842379  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:44.933274  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:45.158434  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:45.343268  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:45.435036  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:45.654268  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:45.843601  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:45.937531  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:46.158633  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:46.343197  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:46.434383  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:46.657622  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:46.841546  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:46.933744  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:47.155295  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:47.340726  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:47.433147  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:47.679775  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:47.844215  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:47.932229  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:48.153335  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:48.343412  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:48.442844  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:48.654193  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:48.841737  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:48.933431  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:49.153544  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:49.341096  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:49.433393  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:49.653674  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:49.843026  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:49.933149  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:50.152820  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:50.341510  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:50.434798  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:50.653790  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:50.840694  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:50.934067  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:51.153791  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:51.343894  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:51.433814  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:51.657878  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:51.844981  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:51.935583  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:52.157413  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:52.343369  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:52.433631  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:52.654907  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:52.842794  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:52.933998  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:53.158377  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:53.341420  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:53.432910  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:53.654289  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:53.848496  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:53.947956  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:54.157275  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:54.341620  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:54.445764  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:54.654583  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:54.858389  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:54.933061  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:55.154206  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:55.344563  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:55.433949  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:55.655056  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:55.845035  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:55.934401  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:56.154361  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:56.356851  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:56.434550  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:56.655674  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:56.845028  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:56.936071  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:57.153645  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:57.342240  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:57.434158  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:57.653725  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:57.842251  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:57.932633  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:58.156431  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:58.344070  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:58.436446  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:58.654404  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:58.842091  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:58.933769  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:59.154651  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:59.341162  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:59.433385  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:13:59.654822  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:13:59.855120  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:13:59.941926  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:00.157986  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:00.342759  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:00.432512  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:00.654306  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:00.852249  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:00.936314  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:01.156835  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:01.346521  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:01.432419  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:01.657528  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:01.930220  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:01.933339  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:02.156085  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:02.341440  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:02.432869  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:02.653811  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:02.842861  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:02.936761  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:03.157562  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:03.345683  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:03.527854  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:03.655379  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:03.841313  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:03.933469  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:04.158411  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:04.341909  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:04.434907  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:04.654697  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:04.841413  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:04.934139  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:05.155216  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:05.345325  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:05.445627  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:05.655869  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:05.842607  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:05.933955  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:06.160457  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:06.341394  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:06.434638  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:06.654922  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:06.841890  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:06.941947  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:07.156155  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:07.341351  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:07.433703  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:07.654704  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:07.844341  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:07.932338  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:08.165063  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:08.343552  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:08.432244  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:08.654382  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:08.841443  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:08.934002  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:09.155938  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:09.345005  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:09.444647  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:09.657571  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:09.840979  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:09.934835  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:10.410416  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:10.411382  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:10.433283  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:10.653418  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:10.841640  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:10.932769  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:11.157857  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:11.341519  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:11.432818  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:11.654184  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:11.840821  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:11.933753  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:12.155549  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:12.341471  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:12.433819  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:12.654870  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:12.841100  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:14:12.933642  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:13.153722  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:13.341960  397455 kapi.go:107] duration metric: took 1m2.504850074s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1206 09:14:13.433237  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:13.654464  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:13.934770  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:14.154345  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:14.433829  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:14.656438  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:14.933533  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:15.154507  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:15.434111  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:15.655691  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:15.934680  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:16.156669  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:16.436962  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:16.662317  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:16.934638  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:17.156462  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:17.438444  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:17.662505  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:17.936294  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:18.359679  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:18.433774  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:18.653860  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:18.934972  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:19.157180  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:19.614949  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:19.716171  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:19.933418  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:20.156122  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:20.435747  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:20.656787  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:20.933784  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:21.159149  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:21.433620  397455 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:14:21.654380  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:21.933098  397455 kapi.go:107] duration metric: took 1m12.003988282s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1206 09:14:22.153481  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:22.654700  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:23.153403  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:23.654757  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:24.155166  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:24.656323  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:25.155051  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:25.656766  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:26.156682  397455 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:14:26.654063  397455 kapi.go:107] duration metric: took 1m14.004061087s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1206 09:14:26.655800  397455 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-774690 cluster.
	I1206 09:14:26.657177  397455 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1206 09:14:26.658365  397455 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1206 09:14:26.659818  397455 out.go:179] * Enabled addons: registry-creds, storage-provisioner, cloud-spanner, ingress-dns, amd-gpu-device-plugin, storage-provisioner-rancher, inspektor-gadget, nvidia-device-plugin, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1206 09:14:26.661096  397455 addons.go:530] duration metric: took 1m26.776022692s for enable addons: enabled=[registry-creds storage-provisioner cloud-spanner ingress-dns amd-gpu-device-plugin storage-provisioner-rancher inspektor-gadget nvidia-device-plugin metrics-server yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1206 09:14:26.661145  397455 start.go:247] waiting for cluster config update ...
	I1206 09:14:26.661168  397455 start.go:256] writing updated cluster config ...
	I1206 09:14:26.661487  397455 ssh_runner.go:195] Run: rm -f paused
	I1206 09:14:26.668181  397455 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:14:26.673234  397455 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-l9grt" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:14:26.681163  397455 pod_ready.go:94] pod "coredns-66bc5c9577-l9grt" is "Ready"
	I1206 09:14:26.681211  397455 pod_ready.go:86] duration metric: took 7.944214ms for pod "coredns-66bc5c9577-l9grt" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:14:26.684186  397455 pod_ready.go:83] waiting for pod "etcd-addons-774690" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:14:26.689754  397455 pod_ready.go:94] pod "etcd-addons-774690" is "Ready"
	I1206 09:14:26.689788  397455 pod_ready.go:86] duration metric: took 5.579272ms for pod "etcd-addons-774690" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:14:26.691762  397455 pod_ready.go:83] waiting for pod "kube-apiserver-addons-774690" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:14:26.697701  397455 pod_ready.go:94] pod "kube-apiserver-addons-774690" is "Ready"
	I1206 09:14:26.697741  397455 pod_ready.go:86] duration metric: took 5.961081ms for pod "kube-apiserver-addons-774690" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:14:26.704301  397455 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-774690" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:14:27.073578  397455 pod_ready.go:94] pod "kube-controller-manager-addons-774690" is "Ready"
	I1206 09:14:27.073608  397455 pod_ready.go:86] duration metric: took 369.279767ms for pod "kube-controller-manager-addons-774690" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:14:27.274390  397455 pod_ready.go:83] waiting for pod "kube-proxy-jzp4f" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:14:27.674174  397455 pod_ready.go:94] pod "kube-proxy-jzp4f" is "Ready"
	I1206 09:14:27.674209  397455 pod_ready.go:86] duration metric: took 399.791957ms for pod "kube-proxy-jzp4f" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:14:27.873006  397455 pod_ready.go:83] waiting for pod "kube-scheduler-addons-774690" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:14:28.273339  397455 pod_ready.go:94] pod "kube-scheduler-addons-774690" is "Ready"
	I1206 09:14:28.273368  397455 pod_ready.go:86] duration metric: took 400.335134ms for pod "kube-scheduler-addons-774690" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:14:28.273380  397455 pod_ready.go:40] duration metric: took 1.60514786s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:14:28.320968  397455 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1206 09:14:28.322740  397455 out.go:179] * Done! kubectl is now configured to use "addons-774690" cluster and "default" namespace by default
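The kapi.go lines above record minikube polling three label selectors (app.kubernetes.io/name=ingress-nginx, kubernetes.io/minikube-addons=csi-hostpath-driver, kubernetes.io/minikube-addons=gcp-auth) until the matching pods report Ready. A rough manual equivalent is sketched below; it is only an approximation of what the poll checks, and the ingress-nginx namespace and the 6m timeout are assumptions, not values taken from this run.

	# hedged sketch: approximate kubectl equivalent of the ingress-nginx readiness poll above
	kubectl --context addons-774690 -n ingress-nginx wait pod \
	  -l app.kubernetes.io/name=ingress-nginx \
	  --for=condition=Ready --timeout=6m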
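The gcp-auth messages above note that a pod can opt out of credential mounting by carrying a label with the gcp-auth-skip-secret key. A minimal sketch of doing that at pod creation time follows; the pod name "demo", the nginx image, and the label value "true" are illustrative assumptions, and only the label key comes from the message above.

	# hedged sketch: create a pod that the gcp-auth webhook should skip
	kubectl --context addons-774690 run demo --image=nginx \
	  --labels=gcp-auth-skip-secret=true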
	
	
	==> CRI-O <==
	Dec 06 09:17:39 addons-774690 crio[800]: time="2025-12-06 09:17:39.181767409Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765012659181742096,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585488,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=239eed1c-6e60-42df-806b-1bc9bded394a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 09:17:39 addons-774690 crio[800]: time="2025-12-06 09:17:39.182875640Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=01816479-5c92-4364-833d-f63e3df18f53 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 09:17:39 addons-774690 crio[800]: time="2025-12-06 09:17:39.182934683Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=01816479-5c92-4364-833d-f63e3df18f53 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 09:17:39 addons-774690 crio[800]: time="2025-12-06 09:17:39.183229731Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e1cee18d79c87bdb4fdcf0e2d5c674f013ebf86580772154072e1ed786f7ed7,PodSandboxId:71fcc4f6cb756a525536deb6b9d97220e091c16afb8f0ca488d3de14c216af5a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765012518156891547,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea010f3e-0b70-4331-8ef2-e8dbeb8da0dd,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c3b54b242a4dbc9a6b8480824017e9c9c6efa05164c2e351137352cb17cd6cc,PodSandboxId:ffb5e4f0851d0f7a56808138790d5472437aeb9761b487e1720f6c2db147a419,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765012472828861850,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ccc10db2-3a00-4383-80ab-805fd3af8161,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b6f65a7e0016613d05aaf2daae13186f051dd1e1e72fc6802d5acdd53421dea,PodSandboxId:8afa3f705b0c6e4feab3450eb8883f9cbb51b27fb57059af1873c5b0173db425,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1765012461241906908,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-cghl5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: da20bd88-903b-4cb3-bfa2-e07ba41ddf78,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5032adb8f732b213034bf5c01beb4d8a43caf71af8b71077d79ed631659e35d8,PodSandboxId:196c96f53ebd7eca5c62ef767a5585d3332a9690fb78d1a9c5753662a96715b7,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01
c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1765012434467670209,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-wjfhp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e654e7d8-13a2-47be-a4f3-2e26e6350997,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c851074b8cff4771f8483139eed4b1df8fd501a563dd6576daaaa92457d4bd4a,PodSandboxId:1e9c8ab1f4f0a62113356cd2c2f5dbdb22a363ce60276bd95f12a6ac531365ba,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1765012434346239312,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4c946,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: be11f3da-9722-453a-835b-e18b8f03516c,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:476a9346a94d5e97321a7dad3b910fafc5ddb361d5194546e8b7203e9348e5ea,PodSandboxId:c7bb2700615f9b21f09b213f2165bf5bf18924a2725583d502f86217d81cb694,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765012412119893295,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4117e868-9c8a-440e-9af2-45709b4fbdc3,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35444c80ca7de2189bace6e1fa56dbf377fff82be49890c85053e4d3183ce534,PodSandboxId:3acadb9fd10d100d5e337ab828043990d404ee560df077b0eaff596bf8c88e82,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&Imag
eSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765012389336047518,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-svq5h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff554a2a-f7e8-4581-b0cc-821075d441f9,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:194a16bb3f558c9e566d6dd9d5d4d7ad1544b1cfd117ed99a25359e961ff291f,PodSandboxId:31e85e1cb6b9807622c540332611d99b9261a6273e7058205a0ba0292d86a79f,Metadata:&ContainerMetadata{Name:stora
ge-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765012388959116107,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d85c1bd3-4a0c-4397-9c7d-4cb74f18e187,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c52ec85be8f0804ab5cfb12ca329e31a05a691124b480be6dad48aaf8b57dd5d,PodSandboxId:fe6d9f5f0c70386cc189f4d1509d794f0ed1542d0a663567fec6acbf84c47c3c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0
,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765012382415682102,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-l9grt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c33d79c-6db7-4610-b394-d2b81216197d,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14cd309852195096b88efe9d3a347c387db1c0468ac7422480066369d7c24144,PodSandboxId:d90dd60e67b985e3e6869abab033af7459d9a60035ae735e6a1da4afeef2f574,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765012381813311355,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jzp4f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df1c8ffd-d67f-46c3-aec5-6a7b099bce49,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cbfd50881df9ebf5b3ff65c7307461fe5f134037643baf679f5a2991aec5829,PodSandboxId:ab48cac50ef3adebb02fcd7be63a03640d303a6d5f911d3a50d60bbaae6e3d70,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765012368917345556,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-774690,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 549fd7c125f874ea8194dda0339bd0ad,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.k
ubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d60b33a68a97763da81ea2c5b36356d161a454f9fdaedaedda9d6770b3a441c5,PodSandboxId:69abe07fabddb33f62a7450189eedce6dc9ae410a2aca409985fc6a444f396d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765012368908698593,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-774690,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c042aa351a5b570e306966a6f284a804,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-p
ort\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cafa29f45be57922934aa2804adc6c03cfd405657efff21b020a18542e39b78,PodSandboxId:c3fa7030d6163b5a793cbf96f4803e4b43dfe48b945917fe7b354987e20ca53e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765012368861531869,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-774690,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df90b68ae6a59daaf09af3b96ff025b7,},Annotations:map[
string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:897c83e9715cf47989d744826a723d32ae6a225303a4d9621d6cb1b373e84ebb,PodSandboxId:b03ab93a9ca6007af5cfc2bf48cdead893b9ed565c7c7a9e99e2b0374799ef1c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765012368854172850,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manage
r-addons-774690,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 054878d143440cf1165e963a55f38038,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=01816479-5c92-4364-833d-f63e3df18f53 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 09:17:39 addons-774690 crio[800]: time="2025-12-06 09:17:39.218609594Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a1231e95-2d62-4ec0-861f-7fcd3e8bee11 name=/runtime.v1.RuntimeService/Version
	Dec 06 09:17:39 addons-774690 crio[800]: time="2025-12-06 09:17:39.218694527Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a1231e95-2d62-4ec0-861f-7fcd3e8bee11 name=/runtime.v1.RuntimeService/Version
	Dec 06 09:17:39 addons-774690 crio[800]: time="2025-12-06 09:17:39.220655977Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9e430b55-9425-4cd5-94b3-74cf18a9bc8a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 09:17:39 addons-774690 crio[800]: time="2025-12-06 09:17:39.221925855Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765012659221895773,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585488,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9e430b55-9425-4cd5-94b3-74cf18a9bc8a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 09:17:39 addons-774690 crio[800]: time="2025-12-06 09:17:39.223187237Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8836c094-defa-463e-a0c9-0d0b0b5d0f0a name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 09:17:39 addons-774690 crio[800]: time="2025-12-06 09:17:39.223301871Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8836c094-defa-463e-a0c9-0d0b0b5d0f0a name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 09:17:39 addons-774690 crio[800]: time="2025-12-06 09:17:39.223673975Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e1cee18d79c87bdb4fdcf0e2d5c674f013ebf86580772154072e1ed786f7ed7,PodSandboxId:71fcc4f6cb756a525536deb6b9d97220e091c16afb8f0ca488d3de14c216af5a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765012518156891547,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea010f3e-0b70-4331-8ef2-e8dbeb8da0dd,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c3b54b242a4dbc9a6b8480824017e9c9c6efa05164c2e351137352cb17cd6cc,PodSandboxId:ffb5e4f0851d0f7a56808138790d5472437aeb9761b487e1720f6c2db147a419,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765012472828861850,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ccc10db2-3a00-4383-80ab-805fd3af8161,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b6f65a7e0016613d05aaf2daae13186f051dd1e1e72fc6802d5acdd53421dea,PodSandboxId:8afa3f705b0c6e4feab3450eb8883f9cbb51b27fb57059af1873c5b0173db425,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1765012461241906908,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-cghl5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: da20bd88-903b-4cb3-bfa2-e07ba41ddf78,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5032adb8f732b213034bf5c01beb4d8a43caf71af8b71077d79ed631659e35d8,PodSandboxId:196c96f53ebd7eca5c62ef767a5585d3332a9690fb78d1a9c5753662a96715b7,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01
c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1765012434467670209,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-wjfhp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e654e7d8-13a2-47be-a4f3-2e26e6350997,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c851074b8cff4771f8483139eed4b1df8fd501a563dd6576daaaa92457d4bd4a,PodSandboxId:1e9c8ab1f4f0a62113356cd2c2f5dbdb22a363ce60276bd95f12a6ac531365ba,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1765012434346239312,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4c946,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: be11f3da-9722-453a-835b-e18b8f03516c,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:476a9346a94d5e97321a7dad3b910fafc5ddb361d5194546e8b7203e9348e5ea,PodSandboxId:c7bb2700615f9b21f09b213f2165bf5bf18924a2725583d502f86217d81cb694,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765012412119893295,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4117e868-9c8a-440e-9af2-45709b4fbdc3,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35444c80ca7de2189bace6e1fa56dbf377fff82be49890c85053e4d3183ce534,PodSandboxId:3acadb9fd10d100d5e337ab828043990d404ee560df077b0eaff596bf8c88e82,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&Imag
eSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765012389336047518,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-svq5h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff554a2a-f7e8-4581-b0cc-821075d441f9,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:194a16bb3f558c9e566d6dd9d5d4d7ad1544b1cfd117ed99a25359e961ff291f,PodSandboxId:31e85e1cb6b9807622c540332611d99b9261a6273e7058205a0ba0292d86a79f,Metadata:&ContainerMetadata{Name:stora
ge-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765012388959116107,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d85c1bd3-4a0c-4397-9c7d-4cb74f18e187,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c52ec85be8f0804ab5cfb12ca329e31a05a691124b480be6dad48aaf8b57dd5d,PodSandboxId:fe6d9f5f0c70386cc189f4d1509d794f0ed1542d0a663567fec6acbf84c47c3c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0
,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765012382415682102,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-l9grt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c33d79c-6db7-4610-b394-d2b81216197d,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14cd309852195096b88efe9d3a347c387db1c0468ac7422480066369d7c24144,PodSandboxId:d90dd60e67b985e3e6869abab033af7459d9a60035ae735e6a1da4afeef2f574,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765012381813311355,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jzp4f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df1c8ffd-d67f-46c3-aec5-6a7b099bce49,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cbfd50881df9ebf5b3ff65c7307461fe5f134037643baf679f5a2991aec5829,PodSandboxId:ab48cac50ef3adebb02fcd7be63a03640d303a6d5f911d3a50d60bbaae6e3d70,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765012368917345556,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-774690,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 549fd7c125f874ea8194dda0339bd0ad,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.k
ubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d60b33a68a97763da81ea2c5b36356d161a454f9fdaedaedda9d6770b3a441c5,PodSandboxId:69abe07fabddb33f62a7450189eedce6dc9ae410a2aca409985fc6a444f396d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765012368908698593,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-774690,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c042aa351a5b570e306966a6f284a804,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-p
ort\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cafa29f45be57922934aa2804adc6c03cfd405657efff21b020a18542e39b78,PodSandboxId:c3fa7030d6163b5a793cbf96f4803e4b43dfe48b945917fe7b354987e20ca53e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765012368861531869,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-774690,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df90b68ae6a59daaf09af3b96ff025b7,},Annotations:map[
string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:897c83e9715cf47989d744826a723d32ae6a225303a4d9621d6cb1b373e84ebb,PodSandboxId:b03ab93a9ca6007af5cfc2bf48cdead893b9ed565c7c7a9e99e2b0374799ef1c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765012368854172850,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manage
r-addons-774690,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 054878d143440cf1165e963a55f38038,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8836c094-defa-463e-a0c9-0d0b0b5d0f0a name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 09:17:39 addons-774690 crio[800]: time="2025-12-06 09:17:39.254701849Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7b12f2ee-cd57-4a53-bc5b-c7c2ba61ede7 name=/runtime.v1.RuntimeService/Version
	Dec 06 09:17:39 addons-774690 crio[800]: time="2025-12-06 09:17:39.254790319Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7b12f2ee-cd57-4a53-bc5b-c7c2ba61ede7 name=/runtime.v1.RuntimeService/Version
	Dec 06 09:17:39 addons-774690 crio[800]: time="2025-12-06 09:17:39.256711558Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eb907702-f28f-4e64-870a-64d3a48a56f5 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 09:17:39 addons-774690 crio[800]: time="2025-12-06 09:17:39.258325321Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765012659258229913,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585488,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eb907702-f28f-4e64-870a-64d3a48a56f5 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 09:17:39 addons-774690 crio[800]: time="2025-12-06 09:17:39.259404440Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bfb6d2ed-5bb9-40dd-b5a9-228e258bccd4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 09:17:39 addons-774690 crio[800]: time="2025-12-06 09:17:39.259543553Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bfb6d2ed-5bb9-40dd-b5a9-228e258bccd4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 09:17:39 addons-774690 crio[800]: time="2025-12-06 09:17:39.260543804Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e1cee18d79c87bdb4fdcf0e2d5c674f013ebf86580772154072e1ed786f7ed7,PodSandboxId:71fcc4f6cb756a525536deb6b9d97220e091c16afb8f0ca488d3de14c216af5a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765012518156891547,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea010f3e-0b70-4331-8ef2-e8dbeb8da0dd,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c3b54b242a4dbc9a6b8480824017e9c9c6efa05164c2e351137352cb17cd6cc,PodSandboxId:ffb5e4f0851d0f7a56808138790d5472437aeb9761b487e1720f6c2db147a419,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765012472828861850,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ccc10db2-3a00-4383-80ab-805fd3af8161,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b6f65a7e0016613d05aaf2daae13186f051dd1e1e72fc6802d5acdd53421dea,PodSandboxId:8afa3f705b0c6e4feab3450eb8883f9cbb51b27fb57059af1873c5b0173db425,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1765012461241906908,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-cghl5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: da20bd88-903b-4cb3-bfa2-e07ba41ddf78,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5032adb8f732b213034bf5c01beb4d8a43caf71af8b71077d79ed631659e35d8,PodSandboxId:196c96f53ebd7eca5c62ef767a5585d3332a9690fb78d1a9c5753662a96715b7,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01
c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1765012434467670209,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-wjfhp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e654e7d8-13a2-47be-a4f3-2e26e6350997,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c851074b8cff4771f8483139eed4b1df8fd501a563dd6576daaaa92457d4bd4a,PodSandboxId:1e9c8ab1f4f0a62113356cd2c2f5dbdb22a363ce60276bd95f12a6ac531365ba,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1765012434346239312,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4c946,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: be11f3da-9722-453a-835b-e18b8f03516c,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:476a9346a94d5e97321a7dad3b910fafc5ddb361d5194546e8b7203e9348e5ea,PodSandboxId:c7bb2700615f9b21f09b213f2165bf5bf18924a2725583d502f86217d81cb694,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765012412119893295,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4117e868-9c8a-440e-9af2-45709b4fbdc3,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35444c80ca7de2189bace6e1fa56dbf377fff82be49890c85053e4d3183ce534,PodSandboxId:3acadb9fd10d100d5e337ab828043990d404ee560df077b0eaff596bf8c88e82,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&Imag
eSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765012389336047518,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-svq5h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff554a2a-f7e8-4581-b0cc-821075d441f9,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:194a16bb3f558c9e566d6dd9d5d4d7ad1544b1cfd117ed99a25359e961ff291f,PodSandboxId:31e85e1cb6b9807622c540332611d99b9261a6273e7058205a0ba0292d86a79f,Metadata:&ContainerMetadata{Name:stora
ge-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765012388959116107,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d85c1bd3-4a0c-4397-9c7d-4cb74f18e187,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c52ec85be8f0804ab5cfb12ca329e31a05a691124b480be6dad48aaf8b57dd5d,PodSandboxId:fe6d9f5f0c70386cc189f4d1509d794f0ed1542d0a663567fec6acbf84c47c3c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0
,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765012382415682102,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-l9grt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c33d79c-6db7-4610-b394-d2b81216197d,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14cd309852195096b88efe9d3a347c387db1c0468ac7422480066369d7c24144,PodSandboxId:d90dd60e67b985e3e6869abab033af7459d9a60035ae735e6a1da4afeef2f574,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765012381813311355,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jzp4f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df1c8ffd-d67f-46c3-aec5-6a7b099bce49,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cbfd50881df9ebf5b3ff65c7307461fe5f134037643baf679f5a2991aec5829,PodSandboxId:ab48cac50ef3adebb02fcd7be63a03640d303a6d5f911d3a50d60bbaae6e3d70,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765012368917345556,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-774690,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 549fd7c125f874ea8194dda0339bd0ad,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.k
ubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d60b33a68a97763da81ea2c5b36356d161a454f9fdaedaedda9d6770b3a441c5,PodSandboxId:69abe07fabddb33f62a7450189eedce6dc9ae410a2aca409985fc6a444f396d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765012368908698593,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-774690,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c042aa351a5b570e306966a6f284a804,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-p
ort\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cafa29f45be57922934aa2804adc6c03cfd405657efff21b020a18542e39b78,PodSandboxId:c3fa7030d6163b5a793cbf96f4803e4b43dfe48b945917fe7b354987e20ca53e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765012368861531869,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-774690,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df90b68ae6a59daaf09af3b96ff025b7,},Annotations:map[
string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:897c83e9715cf47989d744826a723d32ae6a225303a4d9621d6cb1b373e84ebb,PodSandboxId:b03ab93a9ca6007af5cfc2bf48cdead893b9ed565c7c7a9e99e2b0374799ef1c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765012368854172850,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manage
r-addons-774690,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 054878d143440cf1165e963a55f38038,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bfb6d2ed-5bb9-40dd-b5a9-228e258bccd4 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 09:17:39 addons-774690 crio[800]: time="2025-12-06 09:17:39.293170170Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a62433ef-2eca-4419-b418-a4d16b85bcd0 name=/runtime.v1.RuntimeService/Version
	Dec 06 09:17:39 addons-774690 crio[800]: time="2025-12-06 09:17:39.293259732Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a62433ef-2eca-4419-b418-a4d16b85bcd0 name=/runtime.v1.RuntimeService/Version
	Dec 06 09:17:39 addons-774690 crio[800]: time="2025-12-06 09:17:39.295210962Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=996d5602-b77d-4ebe-b3b6-4dd54d6dd333 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 09:17:39 addons-774690 crio[800]: time="2025-12-06 09:17:39.296599505Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765012659296563289,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585488,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=996d5602-b77d-4ebe-b3b6-4dd54d6dd333 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 09:17:39 addons-774690 crio[800]: time="2025-12-06 09:17:39.297940744Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=94746b33-f7dc-4546-aac4-8502f7e7f59c name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 09:17:39 addons-774690 crio[800]: time="2025-12-06 09:17:39.298002143Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=94746b33-f7dc-4546-aac4-8502f7e7f59c name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 09:17:39 addons-774690 crio[800]: time="2025-12-06 09:17:39.298326589Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9e1cee18d79c87bdb4fdcf0e2d5c674f013ebf86580772154072e1ed786f7ed7,PodSandboxId:71fcc4f6cb756a525536deb6b9d97220e091c16afb8f0ca488d3de14c216af5a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765012518156891547,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ea010f3e-0b70-4331-8ef2-e8dbeb8da0dd,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c3b54b242a4dbc9a6b8480824017e9c9c6efa05164c2e351137352cb17cd6cc,PodSandboxId:ffb5e4f0851d0f7a56808138790d5472437aeb9761b487e1720f6c2db147a419,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765012472828861850,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ccc10db2-3a00-4383-80ab-805fd3af8161,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b6f65a7e0016613d05aaf2daae13186f051dd1e1e72fc6802d5acdd53421dea,PodSandboxId:8afa3f705b0c6e4feab3450eb8883f9cbb51b27fb57059af1873c5b0173db425,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1765012461241906908,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-cghl5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: da20bd88-903b-4cb3-bfa2-e07ba41ddf78,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5032adb8f732b213034bf5c01beb4d8a43caf71af8b71077d79ed631659e35d8,PodSandboxId:196c96f53ebd7eca5c62ef767a5585d3332a9690fb78d1a9c5753662a96715b7,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01
c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1765012434467670209,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-wjfhp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e654e7d8-13a2-47be-a4f3-2e26e6350997,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c851074b8cff4771f8483139eed4b1df8fd501a563dd6576daaaa92457d4bd4a,PodSandboxId:1e9c8ab1f4f0a62113356cd2c2f5dbdb22a363ce60276bd95f12a6ac531365ba,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1765012434346239312,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4c946,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: be11f3da-9722-453a-835b-e18b8f03516c,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:476a9346a94d5e97321a7dad3b910fafc5ddb361d5194546e8b7203e9348e5ea,PodSandboxId:c7bb2700615f9b21f09b213f2165bf5bf18924a2725583d502f86217d81cb694,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765012412119893295,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4117e868-9c8a-440e-9af2-45709b4fbdc3,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35444c80ca7de2189bace6e1fa56dbf377fff82be49890c85053e4d3183ce534,PodSandboxId:3acadb9fd10d100d5e337ab828043990d404ee560df077b0eaff596bf8c88e82,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&Imag
eSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765012389336047518,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-svq5h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff554a2a-f7e8-4581-b0cc-821075d441f9,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:194a16bb3f558c9e566d6dd9d5d4d7ad1544b1cfd117ed99a25359e961ff291f,PodSandboxId:31e85e1cb6b9807622c540332611d99b9261a6273e7058205a0ba0292d86a79f,Metadata:&ContainerMetadata{Name:stora
ge-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765012388959116107,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d85c1bd3-4a0c-4397-9c7d-4cb74f18e187,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c52ec85be8f0804ab5cfb12ca329e31a05a691124b480be6dad48aaf8b57dd5d,PodSandboxId:fe6d9f5f0c70386cc189f4d1509d794f0ed1542d0a663567fec6acbf84c47c3c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0
,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765012382415682102,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-l9grt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c33d79c-6db7-4610-b394-d2b81216197d,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14cd309852195096b88efe9d3a347c387db1c0468ac7422480066369d7c24144,PodSandboxId:d90dd60e67b985e3e6869abab033af7459d9a60035ae735e6a1da4afeef2f574,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765012381813311355,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jzp4f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df1c8ffd-d67f-46c3-aec5-6a7b099bce49,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cbfd50881df9ebf5b3ff65c7307461fe5f134037643baf679f5a2991aec5829,PodSandboxId:ab48cac50ef3adebb02fcd7be63a03640d303a6d5f911d3a50d60bbaae6e3d70,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765012368917345556,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-774690,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 549fd7c125f874ea8194dda0339bd0ad,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.k
ubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d60b33a68a97763da81ea2c5b36356d161a454f9fdaedaedda9d6770b3a441c5,PodSandboxId:69abe07fabddb33f62a7450189eedce6dc9ae410a2aca409985fc6a444f396d7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765012368908698593,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-774690,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c042aa351a5b570e306966a6f284a804,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-p
ort\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cafa29f45be57922934aa2804adc6c03cfd405657efff21b020a18542e39b78,PodSandboxId:c3fa7030d6163b5a793cbf96f4803e4b43dfe48b945917fe7b354987e20ca53e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765012368861531869,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-774690,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df90b68ae6a59daaf09af3b96ff025b7,},Annotations:map[
string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:897c83e9715cf47989d744826a723d32ae6a225303a4d9621d6cb1b373e84ebb,PodSandboxId:b03ab93a9ca6007af5cfc2bf48cdead893b9ed565c7c7a9e99e2b0374799ef1c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765012368854172850,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manage
r-addons-774690,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 054878d143440cf1165e963a55f38038,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=94746b33-f7dc-4546-aac4-8502f7e7f59c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                        NAMESPACE
	9e1cee18d79c8       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                              2 minutes ago       Running             nginx                     0                   71fcc4f6cb756       nginx                                      default
	4c3b54b242a4d       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   ffb5e4f0851d0       busybox                                    default
	0b6f65a7e0016       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27             3 minutes ago       Running             controller                0                   8afa3f705b0c6       ingress-nginx-controller-6c8bf45fb-cghl5   ingress-nginx
	5032adb8f732b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f   3 minutes ago       Exited              patch                     0                   196c96f53ebd7       ingress-nginx-admission-patch-wjfhp        ingress-nginx
	c851074b8cff4       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f   3 minutes ago       Exited              create                    0                   1e9c8ab1f4f0a       ingress-nginx-admission-create-4c946       ingress-nginx
	476a9346a94d5       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               4 minutes ago       Running             minikube-ingress-dns      0                   c7bb2700615f9       kube-ingress-dns-minikube                  kube-system
	35444c80ca7de       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   3acadb9fd10d1       amd-gpu-device-plugin-svq5h                kube-system
	194a16bb3f558       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   31e85e1cb6b98       storage-provisioner                        kube-system
	c52ec85be8f08       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             4 minutes ago       Running             coredns                   0                   fe6d9f5f0c703       coredns-66bc5c9577-l9grt                   kube-system
	14cd309852195       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                             4 minutes ago       Running             kube-proxy                0                   d90dd60e67b98       kube-proxy-jzp4f                           kube-system
	2cbfd50881df9       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                             4 minutes ago       Running             kube-scheduler            0                   ab48cac50ef3a       kube-scheduler-addons-774690               kube-system
	d60b33a68a977       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                             4 minutes ago       Running             etcd                      0                   69abe07fabddb       etcd-addons-774690                         kube-system
	0cafa29f45be5       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                             4 minutes ago       Running             kube-apiserver            0                   c3fa7030d6163       kube-apiserver-addons-774690               kube-system
	897c83e9715cf       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                             4 minutes ago       Running             kube-controller-manager   0                   b03ab93a9ca60       kube-controller-manager-addons-774690      kube-system
	
	
	==> coredns [c52ec85be8f0804ab5cfb12ca329e31a05a691124b480be6dad48aaf8b57dd5d] <==
	[INFO] 10.244.0.8:38479 - 22161 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.0000837s
	[INFO] 10.244.0.8:38479 - 921 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000530187s
	[INFO] 10.244.0.8:38479 - 33226 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000134107s
	[INFO] 10.244.0.8:38479 - 27822 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00009165s
	[INFO] 10.244.0.8:38479 - 30955 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000119976s
	[INFO] 10.244.0.8:38479 - 57768 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000097078s
	[INFO] 10.244.0.8:38479 - 34237 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000150233s
	[INFO] 10.244.0.8:54447 - 25639 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000354338s
	[INFO] 10.244.0.8:54447 - 25349 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000384249s
	[INFO] 10.244.0.8:57241 - 24735 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000477066s
	[INFO] 10.244.0.8:57241 - 24981 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000229799s
	[INFO] 10.244.0.8:34151 - 4730 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000081528s
	[INFO] 10.244.0.8:34151 - 5009 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000249135s
	[INFO] 10.244.0.8:45428 - 59898 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000223567s
	[INFO] 10.244.0.8:45428 - 60072 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000067055s
	[INFO] 10.244.0.23:33031 - 61050 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000428534s
	[INFO] 10.244.0.23:43838 - 37140 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000514283s
	[INFO] 10.244.0.23:38503 - 47608 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000124935s
	[INFO] 10.244.0.23:46672 - 12201 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000116581s
	[INFO] 10.244.0.23:51075 - 19560 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000090169s
	[INFO] 10.244.0.23:40804 - 24550 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000090307s
	[INFO] 10.244.0.23:36804 - 40666 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001298056s
	[INFO] 10.244.0.23:54972 - 54554 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 306 0.001095651s
	[INFO] 10.244.0.28:42545 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000593436s
	[INFO] 10.244.0.28:53364 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000132493s
	
	
	==> describe nodes <==
	Name:               addons-774690
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-774690
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4
	                    minikube.k8s.io/name=addons-774690
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_12_56_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-774690
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:12:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-774690
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:17:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:15:59 +0000   Sat, 06 Dec 2025 09:12:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:15:59 +0000   Sat, 06 Dec 2025 09:12:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:15:59 +0000   Sat, 06 Dec 2025 09:12:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 09:15:59 +0000   Sat, 06 Dec 2025 09:12:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.249
	  Hostname:    addons-774690
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001784Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001784Ki
	  pods:               110
	System Info:
	  Machine ID:                 6637641e43854e2fbcf4adf9edc82956
	  System UUID:                6637641e-4385-4e2f-bcf4-adf9edc82956
	  Boot ID:                    a93b70ca-ecc7-4c42-93b3-1bf205cb601f
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m11s
	  default                     hello-world-app-5d498dc89-twkk5             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-cghl5    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m30s
	  kube-system                 amd-gpu-device-plugin-svq5h                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 coredns-66bc5c9577-l9grt                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m38s
	  kube-system                 etcd-addons-774690                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m44s
	  kube-system                 kube-apiserver-addons-774690                250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 kube-controller-manager-addons-774690       200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 kube-proxy-jzp4f                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 kube-scheduler-addons-774690                100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m36s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m52s (x8 over 4m52s)  kubelet          Node addons-774690 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m52s (x8 over 4m52s)  kubelet          Node addons-774690 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m52s (x7 over 4m52s)  kubelet          Node addons-774690 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m44s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m44s                  kubelet          Node addons-774690 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m44s                  kubelet          Node addons-774690 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m44s                  kubelet          Node addons-774690 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m43s                  kubelet          Node addons-774690 status is now: NodeReady
	  Normal  RegisteredNode           4m40s                  node-controller  Node addons-774690 event: Registered Node addons-774690 in Controller
	
	
	==> dmesg <==
	[  +3.802472] kauditd_printk_skb: 275 callbacks suppressed
	[  +5.941853] kauditd_printk_skb: 5 callbacks suppressed
	[  +9.404962] kauditd_printk_skb: 11 callbacks suppressed
	[  +8.383101] kauditd_printk_skb: 26 callbacks suppressed
	[  +7.801662] kauditd_printk_skb: 32 callbacks suppressed
	[  +6.035811] kauditd_printk_skb: 56 callbacks suppressed
	[  +3.791918] kauditd_printk_skb: 66 callbacks suppressed
	[Dec 6 09:14] kauditd_printk_skb: 122 callbacks suppressed
	[  +3.685175] kauditd_printk_skb: 120 callbacks suppressed
	[  +0.000035] kauditd_printk_skb: 59 callbacks suppressed
	[  +5.854294] kauditd_printk_skb: 53 callbacks suppressed
	[  +3.586770] kauditd_printk_skb: 47 callbacks suppressed
	[ +10.492396] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.000025] kauditd_printk_skb: 22 callbacks suppressed
	[  +0.782217] kauditd_printk_skb: 107 callbacks suppressed
	[Dec 6 09:15] kauditd_printk_skb: 105 callbacks suppressed
	[  +0.533976] kauditd_printk_skb: 114 callbacks suppressed
	[  +5.601089] kauditd_printk_skb: 152 callbacks suppressed
	[  +5.748227] kauditd_printk_skb: 79 callbacks suppressed
	[  +0.000031] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.966558] kauditd_printk_skb: 26 callbacks suppressed
	[  +6.096283] kauditd_printk_skb: 25 callbacks suppressed
	[  +1.158400] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.725997] kauditd_printk_skb: 5 callbacks suppressed
	[Dec 6 09:17] kauditd_printk_skb: 127 callbacks suppressed
	
	
	==> etcd [d60b33a68a97763da81ea2c5b36356d161a454f9fdaedaedda9d6770b3a441c5] <==
	{"level":"info","ts":"2025-12-06T09:14:18.354404Z","caller":"traceutil/trace.go:172","msg":"trace[921660259] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1171; }","duration":"197.744138ms","start":"2025-12-06T09:14:18.156650Z","end":"2025-12-06T09:14:18.354395Z","steps":["trace[921660259] 'agreement among raft nodes before linearized reading'  (duration: 197.330319ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:14:19.606858Z","caller":"traceutil/trace.go:172","msg":"trace[609749521] linearizableReadLoop","detail":"{readStateIndex:1203; appliedIndex:1203; }","duration":"233.521525ms","start":"2025-12-06T09:14:19.373296Z","end":"2025-12-06T09:14:19.606817Z","steps":["trace[609749521] 'read index received'  (duration: 233.515626ms)","trace[609749521] 'applied index is now lower than readState.Index'  (duration: 4.782µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T09:14:19.607011Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"233.698844ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" limit:1 ","response":"range_response_count:1 size:554"}
	{"level":"info","ts":"2025-12-06T09:14:19.607029Z","caller":"traceutil/trace.go:172","msg":"trace[1978986472] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1172; }","duration":"233.73196ms","start":"2025-12-06T09:14:19.373292Z","end":"2025-12-06T09:14:19.607024Z","steps":["trace[1978986472] 'agreement among raft nodes before linearized reading'  (duration: 233.624499ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:14:19.607041Z","caller":"traceutil/trace.go:172","msg":"trace[1381779295] transaction","detail":"{read_only:false; response_revision:1173; number_of_response:1; }","duration":"254.320682ms","start":"2025-12-06T09:14:19.352710Z","end":"2025-12-06T09:14:19.607030Z","steps":["trace[1381779295] 'process raft request'  (duration: 254.234176ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:14:19.607231Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"180.406511ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:14:19.607250Z","caller":"traceutil/trace.go:172","msg":"trace[1179621402] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1173; }","duration":"180.427816ms","start":"2025-12-06T09:14:19.426817Z","end":"2025-12-06T09:14:19.607245Z","steps":["trace[1179621402] 'agreement among raft nodes before linearized reading'  (duration: 180.381933ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:14:30.296338Z","caller":"traceutil/trace.go:172","msg":"trace[1351520407] transaction","detail":"{read_only:false; response_revision:1227; number_of_response:1; }","duration":"126.632898ms","start":"2025-12-06T09:14:30.169680Z","end":"2025-12-06T09:14:30.296313Z","steps":["trace[1351520407] 'process raft request'  (duration: 126.451658ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:14:55.948736Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"165.586397ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-12-06T09:14:55.948829Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"213.553623ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:14:55.948850Z","caller":"traceutil/trace.go:172","msg":"trace[1311282944] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1390; }","duration":"213.569189ms","start":"2025-12-06T09:14:55.735273Z","end":"2025-12-06T09:14:55.948842Z","steps":["trace[1311282944] 'range keys from in-memory index tree'  (duration: 213.495998ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:14:55.948855Z","caller":"traceutil/trace.go:172","msg":"trace[1384824861] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1390; }","duration":"165.726562ms","start":"2025-12-06T09:14:55.783108Z","end":"2025-12-06T09:14:55.948835Z","steps":["trace[1384824861] 'range keys from in-memory index tree'  (duration: 165.52952ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:14:55.948785Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"135.556436ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourceclaimtemplates\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:14:55.948997Z","caller":"traceutil/trace.go:172","msg":"trace[298272656] range","detail":"{range_begin:/registry/resourceclaimtemplates; range_end:; response_count:0; response_revision:1390; }","duration":"135.773447ms","start":"2025-12-06T09:14:55.813217Z","end":"2025-12-06T09:14:55.948991Z","steps":["trace[298272656] 'range keys from in-memory index tree'  (duration: 135.456088ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:14:58.317621Z","caller":"traceutil/trace.go:172","msg":"trace[912941987] linearizableReadLoop","detail":"{readStateIndex:1437; appliedIndex:1437; }","duration":"160.294816ms","start":"2025-12-06T09:14:58.157307Z","end":"2025-12-06T09:14:58.317601Z","steps":["trace[912941987] 'read index received'  (duration: 160.287803ms)","trace[912941987] 'applied index is now lower than readState.Index'  (duration: 6.248µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T09:14:58.317800Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"160.485946ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:14:58.317821Z","caller":"traceutil/trace.go:172","msg":"trace[282143720] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1397; }","duration":"160.541611ms","start":"2025-12-06T09:14:58.157274Z","end":"2025-12-06T09:14:58.317815Z","steps":["trace[282143720] 'agreement among raft nodes before linearized reading'  (duration: 160.464695ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:14:58.317813Z","caller":"traceutil/trace.go:172","msg":"trace[782008309] transaction","detail":"{read_only:false; response_revision:1397; number_of_response:1; }","duration":"175.000766ms","start":"2025-12-06T09:14:58.142734Z","end":"2025-12-06T09:14:58.317734Z","steps":["trace[782008309] 'process raft request'  (duration: 174.889306ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:15:23.740170Z","caller":"traceutil/trace.go:172","msg":"trace[23445071] transaction","detail":"{read_only:false; response_revision:1641; number_of_response:1; }","duration":"191.869829ms","start":"2025-12-06T09:15:23.548080Z","end":"2025-12-06T09:15:23.739950Z","steps":["trace[23445071] 'process raft request'  (duration: 190.780885ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:15:33.244235Z","caller":"traceutil/trace.go:172","msg":"trace[267520751] linearizableReadLoop","detail":"{readStateIndex:1747; appliedIndex:1747; }","duration":"153.066742ms","start":"2025-12-06T09:15:33.091152Z","end":"2025-12-06T09:15:33.244219Z","steps":["trace[267520751] 'read index received'  (duration: 153.061629ms)","trace[267520751] 'applied index is now lower than readState.Index'  (duration: 4.349µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T09:15:33.244365Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"153.208518ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:15:33.244396Z","caller":"traceutil/trace.go:172","msg":"trace[1235348292] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1689; }","duration":"153.256537ms","start":"2025-12-06T09:15:33.091134Z","end":"2025-12-06T09:15:33.244391Z","steps":["trace[1235348292] 'agreement among raft nodes before linearized reading'  (duration: 153.183416ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-06T09:15:33.244527Z","caller":"traceutil/trace.go:172","msg":"trace[830658517] transaction","detail":"{read_only:false; response_revision:1690; number_of_response:1; }","duration":"349.777776ms","start":"2025-12-06T09:15:32.894737Z","end":"2025-12-06T09:15:33.244514Z","steps":["trace[830658517] 'process raft request'  (duration: 349.54617ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:15:33.244951Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-06T09:15:32.894717Z","time spent":"349.849129ms","remote":"127.0.0.1:43552","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1689 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2025-12-06T09:16:03.537650Z","caller":"traceutil/trace.go:172","msg":"trace[1245462227] transaction","detail":"{read_only:false; response_revision:1916; number_of_response:1; }","duration":"119.604921ms","start":"2025-12-06T09:16:03.418010Z","end":"2025-12-06T09:16:03.537615Z","steps":["trace[1245462227] 'process raft request'  (duration: 119.496056ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:17:39 up 5 min,  0 users,  load average: 0.77, 1.72, 0.90
	Linux addons-774690 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Dec  4 13:30:13 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [0cafa29f45be57922934aa2804adc6c03cfd405657efff21b020a18542e39b78] <==
	E1206 09:13:54.731278       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.159.197:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.159.197:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.159.197:443: connect: connection refused" logger="UnhandledError"
	E1206 09:13:54.735509       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.159.197:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.159.197:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.159.197:443: connect: connection refused" logger="UnhandledError"
	I1206 09:13:54.866579       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1206 09:14:40.112004       1 conn.go:339] Error on socket receive: read tcp 192.168.39.249:8443->192.168.39.1:47208: use of closed network connection
	E1206 09:14:40.307842       1 conn.go:339] Error on socket receive: read tcp 192.168.39.249:8443->192.168.39.1:47240: use of closed network connection
	I1206 09:14:49.697039       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.101.15.3"}
	I1206 09:15:13.561584       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1206 09:15:13.783680       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.104.58"}
	E1206 09:15:20.595753       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1206 09:15:40.392818       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1206 09:15:54.995815       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1206 09:15:54.995867       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1206 09:15:55.029570       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1206 09:15:55.029620       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1206 09:15:55.042773       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1206 09:15:55.043098       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1206 09:15:55.062398       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1206 09:15:55.062494       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1206 09:15:55.182796       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1206 09:15:55.182842       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1206 09:15:55.768108       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	W1206 09:15:56.043377       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1206 09:15:56.183737       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1206 09:15:56.199883       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1206 09:17:38.151373       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.33.12"}
	
	
	==> kube-controller-manager [897c83e9715cf47989d744826a723d32ae6a225303a4d9621d6cb1b373e84ebb] <==
	E1206 09:16:00.601119       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:16:03.533985       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:16:03.535075       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:16:03.868373       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:16:03.869551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:16:06.537782       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:16:06.538845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:16:09.941092       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:16:09.942265       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:16:10.586246       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:16:10.587199       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:16:17.429757       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:16:17.430892       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:16:25.114798       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:16:25.115987       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:16:32.491786       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:16:32.493551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:16:42.247566       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:16:42.248720       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:16:51.259080       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:16:51.260232       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:17:09.086011       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:17:09.087232       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:17:21.444673       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:17:21.445841       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [14cd309852195096b88efe9d3a347c387db1c0468ac7422480066369d7c24144] <==
	I1206 09:13:02.668513       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1206 09:13:02.771288       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 09:13:02.771380       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.249"]
	E1206 09:13:02.771537       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:13:03.068036       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1206 09:13:03.068152       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1206 09:13:03.068194       1 server_linux.go:132] "Using iptables Proxier"
	I1206 09:13:03.083320       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:13:03.085020       1 server.go:527] "Version info" version="v1.34.2"
	I1206 09:13:03.085049       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:13:03.101225       1 config.go:200] "Starting service config controller"
	I1206 09:13:03.101256       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:13:03.101276       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:13:03.101280       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:13:03.101290       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:13:03.101293       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:13:03.102020       1 config.go:309] "Starting node config controller"
	I1206 09:13:03.102046       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:13:03.102053       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:13:03.202341       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 09:13:03.202423       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 09:13:03.203473       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [2cbfd50881df9ebf5b3ff65c7307461fe5f134037643baf679f5a2991aec5829] <==
	E1206 09:12:52.240388       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 09:12:52.240929       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1206 09:12:52.241070       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1206 09:12:52.241235       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 09:12:52.241363       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1206 09:12:53.036841       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1206 09:12:53.041394       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1206 09:12:53.061793       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1206 09:12:53.083291       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 09:12:53.107931       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1206 09:12:53.146413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1206 09:12:53.206695       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1206 09:12:53.212550       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1206 09:12:53.223342       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1206 09:12:53.388885       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 09:12:53.437116       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 09:12:53.470695       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1206 09:12:53.505539       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1206 09:12:53.546058       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1206 09:12:53.603586       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 09:12:53.668946       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 09:12:53.726983       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1206 09:12:53.778791       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1206 09:12:53.780020       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1206 09:12:56.212824       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 06 09:16:05 addons-774690 kubelet[1485]: E1206 09:16:05.731073    1485 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765012565730609672 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585488} inodes_used:{value:192}}"
	Dec 06 09:16:05 addons-774690 kubelet[1485]: E1206 09:16:05.731100    1485 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765012565730609672 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585488} inodes_used:{value:192}}"
	Dec 06 09:16:15 addons-774690 kubelet[1485]: E1206 09:16:15.734404    1485 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765012575733988994 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585488} inodes_used:{value:192}}"
	Dec 06 09:16:15 addons-774690 kubelet[1485]: E1206 09:16:15.734503    1485 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765012575733988994 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585488} inodes_used:{value:192}}"
	Dec 06 09:16:20 addons-774690 kubelet[1485]: I1206 09:16:20.345405    1485 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-66bc5c9577-l9grt" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 09:16:25 addons-774690 kubelet[1485]: E1206 09:16:25.737858    1485 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765012585737258412 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585488} inodes_used:{value:192}}"
	Dec 06 09:16:25 addons-774690 kubelet[1485]: E1206 09:16:25.737886    1485 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765012585737258412 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585488} inodes_used:{value:192}}"
	Dec 06 09:16:35 addons-774690 kubelet[1485]: E1206 09:16:35.741764    1485 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765012595740823775 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585488} inodes_used:{value:192}}"
	Dec 06 09:16:35 addons-774690 kubelet[1485]: E1206 09:16:35.742120    1485 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765012595740823775 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585488} inodes_used:{value:192}}"
	Dec 06 09:16:41 addons-774690 kubelet[1485]: I1206 09:16:41.345807    1485 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-svq5h" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 09:16:45 addons-774690 kubelet[1485]: E1206 09:16:45.744806    1485 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765012605744093747 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585488} inodes_used:{value:192}}"
	Dec 06 09:16:45 addons-774690 kubelet[1485]: E1206 09:16:45.744836    1485 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765012605744093747 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585488} inodes_used:{value:192}}"
	Dec 06 09:16:55 addons-774690 kubelet[1485]: E1206 09:16:55.748424    1485 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765012615747979471 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585488} inodes_used:{value:192}}"
	Dec 06 09:16:55 addons-774690 kubelet[1485]: E1206 09:16:55.748521    1485 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765012615747979471 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585488} inodes_used:{value:192}}"
	Dec 06 09:17:05 addons-774690 kubelet[1485]: E1206 09:17:05.752770    1485 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765012625752217517 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585488} inodes_used:{value:192}}"
	Dec 06 09:17:05 addons-774690 kubelet[1485]: E1206 09:17:05.752807    1485 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765012625752217517 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585488} inodes_used:{value:192}}"
	Dec 06 09:17:10 addons-774690 kubelet[1485]: I1206 09:17:10.345776    1485 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 09:17:15 addons-774690 kubelet[1485]: E1206 09:17:15.756737    1485 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765012635755795997 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585488} inodes_used:{value:192}}"
	Dec 06 09:17:15 addons-774690 kubelet[1485]: E1206 09:17:15.756763    1485 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765012635755795997 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585488} inodes_used:{value:192}}"
	Dec 06 09:17:25 addons-774690 kubelet[1485]: E1206 09:17:25.760101    1485 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765012645759702985 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585488} inodes_used:{value:192}}"
	Dec 06 09:17:25 addons-774690 kubelet[1485]: E1206 09:17:25.760142    1485 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765012645759702985 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585488} inodes_used:{value:192}}"
	Dec 06 09:17:27 addons-774690 kubelet[1485]: I1206 09:17:27.345401    1485 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-66bc5c9577-l9grt" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 09:17:35 addons-774690 kubelet[1485]: E1206 09:17:35.763139    1485 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765012655762686436 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585488} inodes_used:{value:192}}"
	Dec 06 09:17:35 addons-774690 kubelet[1485]: E1206 09:17:35.763172    1485 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765012655762686436 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585488} inodes_used:{value:192}}"
	Dec 06 09:17:38 addons-774690 kubelet[1485]: I1206 09:17:38.111764    1485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdj4d\" (UniqueName: \"kubernetes.io/projected/fc885bae-3988-48eb-958b-64907ecbaeb5-kube-api-access-wdj4d\") pod \"hello-world-app-5d498dc89-twkk5\" (UID: \"fc885bae-3988-48eb-958b-64907ecbaeb5\") " pod="default/hello-world-app-5d498dc89-twkk5"
	
	
	==> storage-provisioner [194a16bb3f558c9e566d6dd9d5d4d7ad1544b1cfd117ed99a25359e961ff291f] <==
	W1206 09:17:13.917473       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:17:15.921339       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:17:15.929224       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:17:17.933510       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:17:17.939527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:17:19.943936       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:17:19.953945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:17:21.958046       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:17:21.965502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:17:23.969620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:17:23.978424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:17:25.982287       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:17:25.987383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:17:27.991585       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:17:27.997313       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:17:30.001494       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:17:30.007296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:17:32.011221       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:17:32.017303       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:17:34.021849       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:17:34.027062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:17:36.030122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:17:36.039173       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:17:38.083520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:17:38.099741       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-774690 -n addons-774690
helpers_test.go:269: (dbg) Run:  kubectl --context addons-774690 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-twkk5 ingress-nginx-admission-create-4c946 ingress-nginx-admission-patch-wjfhp
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-774690 describe pod hello-world-app-5d498dc89-twkk5 ingress-nginx-admission-create-4c946 ingress-nginx-admission-patch-wjfhp
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-774690 describe pod hello-world-app-5d498dc89-twkk5 ingress-nginx-admission-create-4c946 ingress-nginx-admission-patch-wjfhp: exit status 1 (75.758633ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-twkk5
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-774690/192.168.39.249
	Start Time:       Sat, 06 Dec 2025 09:17:38 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wdj4d (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-wdj4d:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-twkk5 to addons-774690
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-4c946" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-wjfhp" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-774690 describe pod hello-world-app-5d498dc89-twkk5 ingress-nginx-admission-create-4c946 ingress-nginx-admission-patch-wjfhp: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-774690 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-774690 addons disable ingress-dns --alsologtostderr -v=1: (1.466195759s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-774690 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-774690 addons disable ingress --alsologtostderr -v=1: (7.781134099s)
--- FAIL: TestAddons/parallel/Ingress (156.33s)
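As a rough local-repro sketch only (assuming a minikube source checkout where this suite lives under test/integration and out/minikube-linux-amd64 has already been built; suite-specific flags such as driver or binary selection may also be needed and are not shown here), the single failed subtest can be re-run with the standard go test runner:

	# re-run only the failed subtest; package path and timeout are assumptions, not taken from this report
	go test ./test/integration -run 'TestAddons/parallel/Ingress' -timeout 90m -v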

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (348.64s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-959292 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1206 09:29:04.546441  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-310626/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:29:04.552919  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-310626/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:29:04.564371  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-310626/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:29:04.585897  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-310626/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:29:04.627373  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-310626/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:29:04.708865  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-310626/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:29:04.870497  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-310626/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:29:05.192288  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-310626/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:29:05.834477  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-310626/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:29:07.116192  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-310626/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:29:09.677594  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-310626/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:29:14.799395  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-310626/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:29:25.041397  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-310626/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:29:28.985686  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:29:45.522886  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-310626/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:30:26.485940  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-310626/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:31:48.410284  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-310626/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-959292 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5m46.891930655s)

                                                
                                                
-- stdout --
	* [functional-959292] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-392561/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-392561/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "functional-959292" primary control-plane node in "functional-959292" cluster
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.29.1 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded

                                                
                                                
** /stderr **
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-amd64 start -p functional-959292 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:776: restart took 5m46.89222849s for "functional-959292" cluster.
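The GUEST_START error above ("extra waiting: WaitExtra: context deadline exceeded") suggests the --wait=all readiness gate timed out after the control plane came up, since addons were still enabled. A minimal diagnostic sketch one might run against the half-started profile, assuming the functional-959292 context is reachable (plain kubectl only; the component=kube-apiserver selector is the usual kubeadm static-pod label, not something confirmed by this log):

	# what --wait=all is still waiting on: node and kube-system pod readiness
	kubectl --context functional-959292 get nodes -o wide
	kubectl --context functional-959292 get pods -n kube-system
	# confirm the extra-config admission-plugins value reached the apiserver command line
	kubectl --context functional-959292 -n kube-system get pod -l component=kube-apiserver \
	  -o jsonpath='{.items[0].spec.containers[0].command}'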
I1206 09:32:52.188995  396534 config.go:182] Loaded profile config "functional-959292": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-959292 -n functional-959292
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-959292 logs -n 25: (1.21714935s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-310626 image ls --format yaml --alsologtostderr                                                                                      │ functional-310626 │ jenkins │ v1.37.0 │ 06 Dec 25 09:24 UTC │ 06 Dec 25 09:24 UTC │
	│ ssh     │ functional-310626 ssh pgrep buildkitd                                                                                                           │ functional-310626 │ jenkins │ v1.37.0 │ 06 Dec 25 09:24 UTC │                     │
	│ image   │ functional-310626 image ls --format json --alsologtostderr                                                                                      │ functional-310626 │ jenkins │ v1.37.0 │ 06 Dec 25 09:24 UTC │ 06 Dec 25 09:24 UTC │
	│ image   │ functional-310626 image build -t localhost/my-image:functional-310626 testdata/build --alsologtostderr                                          │ functional-310626 │ jenkins │ v1.37.0 │ 06 Dec 25 09:24 UTC │ 06 Dec 25 09:24 UTC │
	│ image   │ functional-310626 image ls --format table --alsologtostderr                                                                                     │ functional-310626 │ jenkins │ v1.37.0 │ 06 Dec 25 09:24 UTC │ 06 Dec 25 09:24 UTC │
	│ image   │ functional-310626 image ls                                                                                                                      │ functional-310626 │ jenkins │ v1.37.0 │ 06 Dec 25 09:24 UTC │ 06 Dec 25 09:24 UTC │
	│ delete  │ -p functional-310626                                                                                                                            │ functional-310626 │ jenkins │ v1.37.0 │ 06 Dec 25 09:24 UTC │ 06 Dec 25 09:24 UTC │
	│ start   │ -p functional-959292 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:24 UTC │ 06 Dec 25 09:26 UTC │
	│ start   │ -p functional-959292 --alsologtostderr -v=8                                                                                                     │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:26 UTC │ 06 Dec 25 09:26 UTC │
	│ cache   │ functional-959292 cache add registry.k8s.io/pause:3.1                                                                                           │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:26 UTC │ 06 Dec 25 09:26 UTC │
	│ cache   │ functional-959292 cache add registry.k8s.io/pause:3.3                                                                                           │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:26 UTC │ 06 Dec 25 09:26 UTC │
	│ cache   │ functional-959292 cache add registry.k8s.io/pause:latest                                                                                        │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:26 UTC │ 06 Dec 25 09:27 UTC │
	│ cache   │ functional-959292 cache add minikube-local-cache-test:functional-959292                                                                         │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ cache   │ functional-959292 cache delete minikube-local-cache-test:functional-959292                                                                      │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ cache   │ list                                                                                                                                            │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ ssh     │ functional-959292 ssh sudo crictl images                                                                                                        │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ ssh     │ functional-959292 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                              │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ ssh     │ functional-959292 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                         │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │                     │
	│ cache   │ functional-959292 cache reload                                                                                                                  │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ ssh     │ functional-959292 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                         │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                             │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ kubectl │ functional-959292 kubectl -- --context functional-959292 get pods                                                                               │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ start   │ -p functional-959292 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                        │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:27:05
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:27:05.354163  405000 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:27:05.354265  405000 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:27:05.354269  405000 out.go:374] Setting ErrFile to fd 2...
	I1206 09:27:05.354271  405000 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:27:05.354498  405000 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-392561/.minikube/bin
	I1206 09:27:05.355029  405000 out.go:368] Setting JSON to false
	I1206 09:27:05.356004  405000 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4165,"bootTime":1765009060,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:27:05.356057  405000 start.go:143] virtualization: kvm guest
	I1206 09:27:05.358398  405000 out.go:179] * [functional-959292] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:27:05.359753  405000 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:27:05.359781  405000 notify.go:221] Checking for updates...
	I1206 09:27:05.362220  405000 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:27:05.363383  405000 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-392561/kubeconfig
	I1206 09:27:05.367950  405000 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-392561/.minikube
	I1206 09:27:05.369254  405000 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:27:05.370459  405000 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:27:05.372160  405000 config.go:182] Loaded profile config "functional-959292": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:27:05.372262  405000 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:27:05.406948  405000 out.go:179] * Using the kvm2 driver based on existing profile
	I1206 09:27:05.408573  405000 start.go:309] selected driver: kvm2
	I1206 09:27:05.408583  405000 start.go:927] validating driver "kvm2" against &{Name:functional-959292 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-959292 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.122 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:27:05.408701  405000 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:27:05.409629  405000 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:27:05.409658  405000 cni.go:84] Creating CNI manager for ""
	I1206 09:27:05.409742  405000 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 09:27:05.409790  405000 start.go:353] cluster config:
	{Name:functional-959292 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-959292 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.122 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:27:05.409882  405000 iso.go:125] acquiring lock: {Name:mkf36bf2c9901302dc74c7ac02d02007e6a978f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:27:05.411683  405000 out.go:179] * Starting "functional-959292" primary control-plane node in "functional-959292" cluster
	I1206 09:27:05.413086  405000 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 09:27:05.413115  405000 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-392561/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1206 09:27:05.413122  405000 cache.go:65] Caching tarball of preloaded images
	I1206 09:27:05.413214  405000 preload.go:238] Found /home/jenkins/minikube-integration/22047-392561/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 09:27:05.413220  405000 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1206 09:27:05.413331  405000 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/config.json ...
	I1206 09:27:05.413537  405000 start.go:360] acquireMachinesLock for functional-959292: {Name:mk0e8456872a81874c47f1b4b5997728e70c766d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1206 09:27:05.413614  405000 start.go:364] duration metric: took 62.698µs to acquireMachinesLock for "functional-959292"
	I1206 09:27:05.413630  405000 start.go:96] Skipping create...Using existing machine configuration
	I1206 09:27:05.413634  405000 fix.go:54] fixHost starting: 
	I1206 09:27:05.415678  405000 fix.go:112] recreateIfNeeded on functional-959292: state=Running err=<nil>
	W1206 09:27:05.415691  405000 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 09:27:05.417511  405000 out.go:252] * Updating the running kvm2 "functional-959292" VM ...
	I1206 09:27:05.417535  405000 machine.go:94] provisionDockerMachine start ...
	I1206 09:27:05.420691  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.421169  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:05.421191  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.421417  405000 main.go:143] libmachine: Using SSH client type: native
	I1206 09:27:05.421668  405000 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I1206 09:27:05.421672  405000 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 09:27:05.530432  405000 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-959292
	
	I1206 09:27:05.530457  405000 buildroot.go:166] provisioning hostname "functional-959292"
	I1206 09:27:05.533437  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.533923  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:05.533944  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.534145  405000 main.go:143] libmachine: Using SSH client type: native
	I1206 09:27:05.534373  405000 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I1206 09:27:05.534380  405000 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-959292 && echo "functional-959292" | sudo tee /etc/hostname
	I1206 09:27:05.673011  405000 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-959292
	
	I1206 09:27:05.676321  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.676815  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:05.676842  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.677084  405000 main.go:143] libmachine: Using SSH client type: native
	I1206 09:27:05.677310  405000 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I1206 09:27:05.677325  405000 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-959292' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-959292/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-959292' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 09:27:05.790461  405000 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 09:27:05.790486  405000 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22047-392561/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-392561/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-392561/.minikube}
	I1206 09:27:05.790531  405000 buildroot.go:174] setting up certificates
	I1206 09:27:05.790542  405000 provision.go:84] configureAuth start
	I1206 09:27:05.793758  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.794112  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:05.794125  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.796610  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.797015  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:05.797033  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.797173  405000 provision.go:143] copyHostCerts
	I1206 09:27:05.797219  405000 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-392561/.minikube/ca.pem, removing ...
	I1206 09:27:05.797225  405000 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-392561/.minikube/ca.pem
	I1206 09:27:05.797294  405000 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-392561/.minikube/ca.pem (1082 bytes)
	I1206 09:27:05.797448  405000 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-392561/.minikube/cert.pem, removing ...
	I1206 09:27:05.797454  405000 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-392561/.minikube/cert.pem
	I1206 09:27:05.797481  405000 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-392561/.minikube/cert.pem (1123 bytes)
	I1206 09:27:05.797559  405000 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-392561/.minikube/key.pem, removing ...
	I1206 09:27:05.797562  405000 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-392561/.minikube/key.pem
	I1206 09:27:05.797584  405000 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-392561/.minikube/key.pem (1679 bytes)
	I1206 09:27:05.797630  405000 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-392561/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca-key.pem org=jenkins.functional-959292 san=[127.0.0.1 192.168.39.122 functional-959292 localhost minikube]
	I1206 09:27:05.927749  405000 provision.go:177] copyRemoteCerts
	I1206 09:27:05.927805  405000 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 09:27:05.930467  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.930995  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:05.931017  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.931182  405000 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/functional-959292/id_rsa Username:docker}
	I1206 09:27:06.020293  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 09:27:06.062999  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 09:27:06.103800  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 09:27:06.135603  405000 provision.go:87] duration metric: took 345.046364ms to configureAuth
	I1206 09:27:06.135630  405000 buildroot.go:189] setting minikube options for container-runtime
	I1206 09:27:06.135924  405000 config.go:182] Loaded profile config "functional-959292": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:27:06.138757  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:06.139157  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:06.139176  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:06.139330  405000 main.go:143] libmachine: Using SSH client type: native
	I1206 09:27:06.139546  405000 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I1206 09:27:06.139563  405000 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 09:27:11.746822  405000 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 09:27:11.746840  405000 machine.go:97] duration metric: took 6.329297702s to provisionDockerMachine
	I1206 09:27:11.746854  405000 start.go:293] postStartSetup for "functional-959292" (driver="kvm2")
	I1206 09:27:11.746876  405000 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 09:27:11.746961  405000 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 09:27:11.750570  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:11.751014  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:11.751033  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:11.751196  405000 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/functional-959292/id_rsa Username:docker}
	I1206 09:27:11.837868  405000 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 09:27:11.843271  405000 info.go:137] Remote host: Buildroot 2025.02
	I1206 09:27:11.843298  405000 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-392561/.minikube/addons for local assets ...
	I1206 09:27:11.843387  405000 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-392561/.minikube/files for local assets ...
	I1206 09:27:11.843463  405000 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-392561/.minikube/files/etc/ssl/certs/3965342.pem -> 3965342.pem in /etc/ssl/certs
	I1206 09:27:11.843553  405000 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-392561/.minikube/files/etc/test/nested/copy/396534/hosts -> hosts in /etc/test/nested/copy/396534
	I1206 09:27:11.843597  405000 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/396534
	I1206 09:27:11.856490  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/files/etc/ssl/certs/3965342.pem --> /etc/ssl/certs/3965342.pem (1708 bytes)
	I1206 09:27:11.887680  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/files/etc/test/nested/copy/396534/hosts --> /etc/test/nested/copy/396534/hosts (40 bytes)
	I1206 09:27:11.917471  405000 start.go:296] duration metric: took 170.599577ms for postStartSetup
	I1206 09:27:11.917525  405000 fix.go:56] duration metric: took 6.503890577s for fixHost
	I1206 09:27:11.920391  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:11.920829  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:11.920843  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:11.921039  405000 main.go:143] libmachine: Using SSH client type: native
	I1206 09:27:11.921236  405000 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I1206 09:27:11.921240  405000 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1206 09:27:12.029674  405000 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765013232.025035703
	
	I1206 09:27:12.029690  405000 fix.go:216] guest clock: 1765013232.025035703
	I1206 09:27:12.029728  405000 fix.go:229] Guest: 2025-12-06 09:27:12.025035703 +0000 UTC Remote: 2025-12-06 09:27:11.917528099 +0000 UTC m=+6.615934527 (delta=107.507604ms)
	I1206 09:27:12.029754  405000 fix.go:200] guest clock delta is within tolerance: 107.507604ms
	I1206 09:27:12.029760  405000 start.go:83] releasing machines lock for "functional-959292", held for 6.616137159s
	I1206 09:27:12.032871  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:12.033367  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:12.033386  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:12.033972  405000 ssh_runner.go:195] Run: cat /version.json
	I1206 09:27:12.034041  405000 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 09:27:12.037021  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:12.037356  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:12.037372  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:12.037454  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:12.037528  405000 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/functional-959292/id_rsa Username:docker}
	I1206 09:27:12.037968  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:12.037994  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:12.038195  405000 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/functional-959292/id_rsa Username:docker}
	I1206 09:27:12.135127  405000 ssh_runner.go:195] Run: systemctl --version
	I1206 09:27:12.178512  405000 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 09:27:12.381531  405000 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 09:27:12.396979  405000 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 09:27:12.397040  405000 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 09:27:12.420072  405000 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 09:27:12.420094  405000 start.go:496] detecting cgroup driver to use...
	I1206 09:27:12.420194  405000 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 09:27:12.466799  405000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 09:27:12.509494  405000 docker.go:218] disabling cri-docker service (if available) ...
	I1206 09:27:12.509562  405000 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 09:27:12.561878  405000 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 09:27:12.598609  405000 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 09:27:12.872841  405000 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 09:27:13.059884  405000 docker.go:234] disabling docker service ...
	I1206 09:27:13.059949  405000 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 09:27:13.093867  405000 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 09:27:13.120308  405000 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 09:27:13.320589  405000 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 09:27:13.498865  405000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 09:27:13.515293  405000 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:27:13.538889  405000 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 09:27:13.538948  405000 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:27:13.551961  405000 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 09:27:13.552020  405000 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:27:13.565424  405000 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:27:13.578556  405000 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:27:13.591163  405000 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 09:27:13.605026  405000 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:27:13.618520  405000 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:27:13.632537  405000 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:27:13.646329  405000 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 09:27:13.658570  405000 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 09:27:13.670728  405000 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:27:13.846425  405000 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 09:28:44.155984  405000 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.309526087s)
	I1206 09:28:44.156040  405000 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 09:28:44.156100  405000 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 09:28:44.162119  405000 start.go:564] Will wait 60s for crictl version
	I1206 09:28:44.162184  405000 ssh_runner.go:195] Run: which crictl
	I1206 09:28:44.166332  405000 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1206 09:28:44.207039  405000 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1206 09:28:44.207130  405000 ssh_runner.go:195] Run: crio --version
	I1206 09:28:44.238213  405000 ssh_runner.go:195] Run: crio --version
	I1206 09:28:44.269956  405000 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.29.1 ...
	I1206 09:28:44.274130  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:28:44.274499  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:28:44.274517  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:28:44.274693  405000 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1206 09:28:44.281120  405000 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1206 09:28:44.282155  405000 kubeadm.go:884] updating cluster {Name:functional-959292 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-959292 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.122 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:28:44.282326  405000 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 09:28:44.282393  405000 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:28:44.325810  405000 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:28:44.325822  405000 crio.go:433] Images already preloaded, skipping extraction
	I1206 09:28:44.325876  405000 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:28:44.356541  405000 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:28:44.356553  405000 cache_images.go:86] Images are preloaded, skipping loading
	I1206 09:28:44.356560  405000 kubeadm.go:935] updating node { 192.168.39.122 8441 v1.35.0-beta.0 crio true true} ...
	I1206 09:28:44.356678  405000 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-959292 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.122
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-959292 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 09:28:44.356770  405000 ssh_runner.go:195] Run: crio config
	I1206 09:28:44.403814  405000 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1206 09:28:44.403837  405000 cni.go:84] Creating CNI manager for ""
	I1206 09:28:44.403854  405000 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 09:28:44.403866  405000 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 09:28:44.403896  405000 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.122 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-959292 NodeName:functional-959292 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.122"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.122 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:28:44.404049  405000 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.122
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-959292"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.122"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.122"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 09:28:44.404129  405000 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 09:28:44.416911  405000 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:28:44.416984  405000 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:28:44.431220  405000 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I1206 09:28:44.454568  405000 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 09:28:44.475028  405000 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2077 bytes)
	I1206 09:28:44.495506  405000 ssh_runner.go:195] Run: grep 192.168.39.122	control-plane.minikube.internal$ /etc/hosts
	I1206 09:28:44.499849  405000 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:28:44.665494  405000 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:28:44.684875  405000 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292 for IP: 192.168.39.122
	I1206 09:28:44.684888  405000 certs.go:195] generating shared ca certs ...
	I1206 09:28:44.684904  405000 certs.go:227] acquiring lock for ca certs: {Name:mk3de97d1b446a24abef5e763ff5edd1f090afa6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:28:44.685063  405000 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-392561/.minikube/ca.key
	I1206 09:28:44.685107  405000 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-392561/.minikube/proxy-client-ca.key
	I1206 09:28:44.685113  405000 certs.go:257] generating profile certs ...
	I1206 09:28:44.685293  405000 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/client.key
	I1206 09:28:44.685367  405000 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/apiserver.key.3de1f674
	I1206 09:28:44.685410  405000 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/proxy-client.key
	I1206 09:28:44.685527  405000 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/396534.pem (1338 bytes)
	W1206 09:28:44.685557  405000 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-392561/.minikube/certs/396534_empty.pem, impossibly tiny 0 bytes
	I1206 09:28:44.685563  405000 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 09:28:44.685587  405000 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca.pem (1082 bytes)
	I1206 09:28:44.685606  405000 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:28:44.685624  405000 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/key.pem (1679 bytes)
	I1206 09:28:44.685662  405000 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-392561/.minikube/files/etc/ssl/certs/3965342.pem (1708 bytes)
	I1206 09:28:44.686407  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:28:44.717857  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 09:28:44.748141  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:28:44.777905  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1206 09:28:44.808483  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 09:28:44.839184  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 09:28:44.869544  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:28:44.899600  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 09:28:44.929911  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/files/etc/ssl/certs/3965342.pem --> /usr/share/ca-certificates/3965342.pem (1708 bytes)
	I1206 09:28:44.959256  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:28:44.988361  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/certs/396534.pem --> /usr/share/ca-certificates/396534.pem (1338 bytes)
	I1206 09:28:45.017387  405000 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:28:45.038256  405000 ssh_runner.go:195] Run: openssl version
	I1206 09:28:45.047367  405000 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3965342.pem
	I1206 09:28:45.059555  405000 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3965342.pem /etc/ssl/certs/3965342.pem
	I1206 09:28:45.071389  405000 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3965342.pem
	I1206 09:28:45.076661  405000 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 09:24 /usr/share/ca-certificates/3965342.pem
	I1206 09:28:45.076758  405000 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3965342.pem
	I1206 09:28:45.084361  405000 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 09:28:45.096215  405000 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:28:45.108030  405000 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:28:45.119412  405000 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:28:45.124889  405000 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 09:12 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:28:45.124968  405000 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:28:45.132255  405000 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:28:45.143921  405000 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/396534.pem
	I1206 09:28:45.155198  405000 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/396534.pem /etc/ssl/certs/396534.pem
	I1206 09:28:45.166926  405000 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/396534.pem
	I1206 09:28:45.172011  405000 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 09:24 /usr/share/ca-certificates/396534.pem
	I1206 09:28:45.172075  405000 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/396534.pem
	I1206 09:28:45.179097  405000 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 09:28:45.190195  405000 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:28:45.195680  405000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 09:28:45.203086  405000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 09:28:45.210171  405000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 09:28:45.217010  405000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 09:28:45.223948  405000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 09:28:45.230923  405000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1206 09:28:45.238258  405000 kubeadm.go:401] StartCluster: {Name:functional-959292 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-959292 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.122 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:28:45.238386  405000 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:28:45.238444  405000 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:28:45.272278  405000 cri.go:89] found id: "3cf20b6d798d084267897441ec085d74b908c5ce44f5a0104461078dade3e3db"
	I1206 09:28:45.272295  405000 cri.go:89] found id: "7e0e61506a238944c5ca29e0e8cd96198ed5f63ae148b71a98b24338f8cec799"
	I1206 09:28:45.272300  405000 cri.go:89] found id: "a7a0409ceca2bb30bc27bd580d3e96626e7b2fcec3e9bc911aba8663b88b14ab"
	I1206 09:28:45.272304  405000 cri.go:89] found id: "6e5074c405f22f240aeee9223542f189d50079b24d71ecc6920bdadbd0ba3be6"
	I1206 09:28:45.272307  405000 cri.go:89] found id: "422ce5b897d2b576b825cdca2cb0d613bfe2c99b74fe8984cd5904f6702c11f5"
	I1206 09:28:45.272311  405000 cri.go:89] found id: "f007b54f29b7c249c166e8323973f208279a7e516813e500f58a370519efedc3"
	I1206 09:28:45.272314  405000 cri.go:89] found id: "db52b0948589f2ebba355737cd876e78592bcd1b1561e85c9b037a02e9276902"
	I1206 09:28:45.272317  405000 cri.go:89] found id: ""
	I1206 09:28:45.272395  405000 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-959292 -n functional-959292
helpers_test.go:269: (dbg) Run:  kubectl --context functional-959292 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (348.64s)
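Context for this failure: the Audit table in the post-mortem below shows that the 09:27 `start -p functional-959292 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all` run never recorded an end time, and the same run's log shows `sudo systemctl restart crio` alone taking over 90 seconds, which plausibly accounts for much of the 348s before the wait gave up. As a minimal sketch (not part of the test suite), the following client-go program checks whether the requested admission plugin actually reached the running kube-apiserver; the context name functional-959292 and the kubeadm label component=kube-apiserver are assumptions taken from this report and the standard kubeadm static-pod layout.

package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig context for the profile under test; the context
	// name "functional-959292" is taken from this report and may differ locally.
	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		clientcmd.NewDefaultClientConfigLoadingRules(),
		&clientcmd.ConfigOverrides{CurrentContext: "functional-959292"},
	).ClientConfig()
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// kubeadm labels the apiserver static pod with component=kube-apiserver;
	// its command-line flags appear in the container command.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "component=kube-apiserver"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, ctr := range p.Spec.Containers {
			for _, arg := range ctr.Command {
				if strings.HasPrefix(arg, "--enable-admission-plugins=") {
					fmt.Printf("%s: %s\n", p.Name, arg)
				}
			}
		}
	}
}

If the flag is missing, or the control-plane pods never return to Ready after the crio restart, a start with --wait=all blocks until its timeout, which matches the failure recorded above.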

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (1.78s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-959292 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:848: kube-scheduler is not Ready: {Phase:Running Conditions:[{Type:PodReadyToStartContainers Status:False} {Type:Initialized Status:True} {Type:Ready Status:False} {Type:ContainersReady Status:False} {Type:PodScheduled Status:True}] Message: Reason: HostIP:192.168.39.122 PodIP:192.168.39.122 StartTime:2025-12-06 09:28:46 +0000 UTC ContainerStatuses:[{Name:kube-scheduler State:{Waiting:<nil> Running:<nil> Terminated:0xc0002da150} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:2 Image:registry.k8s.io/kube-scheduler:v1.35.0-beta.0 ImageID:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46 ContainerID:cri-o://db52b0948589f2ebba355737cd876e78592bcd1b1561e85c9b037a02e9276902}]}
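The assertion above is the core of ComponentHealth: every pod labelled tier=control-plane in kube-system must be Running and carry a Ready=True condition, and here kube-scheduler is Running but not Ready (its only container is reported in a Terminated state with RestartCount:2). A minimal client-go sketch that reproduces the same readiness check outside the test harness, assuming the context name from this report:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the local kubeconfig; the context name is an
	// assumption taken from this report and may differ on other machines.
	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		clientcmd.NewDefaultClientConfigLoadingRules(),
		&clientcmd.ConfigOverrides{CurrentContext: "functional-959292"},
	).ClientConfig()
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same selector the test uses: control-plane static pods in kube-system.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "tier=control-plane"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("%-40s phase=%-9s ready=%v\n", p.Name, p.Status.Phase, ready)
	}
}

Run against the cluster in the state captured here, this would print kube-scheduler with ready=false, matching the assertion failure.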
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-959292 -n functional-959292
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-959292 logs -n 25: (1.196242587s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-310626 image ls --format yaml --alsologtostderr                                                                                      │ functional-310626 │ jenkins │ v1.37.0 │ 06 Dec 25 09:24 UTC │ 06 Dec 25 09:24 UTC │
	│ ssh     │ functional-310626 ssh pgrep buildkitd                                                                                                           │ functional-310626 │ jenkins │ v1.37.0 │ 06 Dec 25 09:24 UTC │                     │
	│ image   │ functional-310626 image ls --format json --alsologtostderr                                                                                      │ functional-310626 │ jenkins │ v1.37.0 │ 06 Dec 25 09:24 UTC │ 06 Dec 25 09:24 UTC │
	│ image   │ functional-310626 image build -t localhost/my-image:functional-310626 testdata/build --alsologtostderr                                          │ functional-310626 │ jenkins │ v1.37.0 │ 06 Dec 25 09:24 UTC │ 06 Dec 25 09:24 UTC │
	│ image   │ functional-310626 image ls --format table --alsologtostderr                                                                                     │ functional-310626 │ jenkins │ v1.37.0 │ 06 Dec 25 09:24 UTC │ 06 Dec 25 09:24 UTC │
	│ image   │ functional-310626 image ls                                                                                                                      │ functional-310626 │ jenkins │ v1.37.0 │ 06 Dec 25 09:24 UTC │ 06 Dec 25 09:24 UTC │
	│ delete  │ -p functional-310626                                                                                                                            │ functional-310626 │ jenkins │ v1.37.0 │ 06 Dec 25 09:24 UTC │ 06 Dec 25 09:24 UTC │
	│ start   │ -p functional-959292 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:24 UTC │ 06 Dec 25 09:26 UTC │
	│ start   │ -p functional-959292 --alsologtostderr -v=8                                                                                                     │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:26 UTC │ 06 Dec 25 09:26 UTC │
	│ cache   │ functional-959292 cache add registry.k8s.io/pause:3.1                                                                                           │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:26 UTC │ 06 Dec 25 09:26 UTC │
	│ cache   │ functional-959292 cache add registry.k8s.io/pause:3.3                                                                                           │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:26 UTC │ 06 Dec 25 09:26 UTC │
	│ cache   │ functional-959292 cache add registry.k8s.io/pause:latest                                                                                        │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:26 UTC │ 06 Dec 25 09:27 UTC │
	│ cache   │ functional-959292 cache add minikube-local-cache-test:functional-959292                                                                         │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ cache   │ functional-959292 cache delete minikube-local-cache-test:functional-959292                                                                      │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ cache   │ list                                                                                                                                            │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ ssh     │ functional-959292 ssh sudo crictl images                                                                                                        │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ ssh     │ functional-959292 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                              │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ ssh     │ functional-959292 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                         │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │                     │
	│ cache   │ functional-959292 cache reload                                                                                                                  │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ ssh     │ functional-959292 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                         │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                             │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ kubectl │ functional-959292 kubectl -- --context functional-959292 get pods                                                                               │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ start   │ -p functional-959292 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                        │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:27:05
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:27:05.354163  405000 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:27:05.354265  405000 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:27:05.354269  405000 out.go:374] Setting ErrFile to fd 2...
	I1206 09:27:05.354271  405000 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:27:05.354498  405000 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-392561/.minikube/bin
	I1206 09:27:05.355029  405000 out.go:368] Setting JSON to false
	I1206 09:27:05.356004  405000 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4165,"bootTime":1765009060,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:27:05.356057  405000 start.go:143] virtualization: kvm guest
	I1206 09:27:05.358398  405000 out.go:179] * [functional-959292] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:27:05.359753  405000 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:27:05.359781  405000 notify.go:221] Checking for updates...
	I1206 09:27:05.362220  405000 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:27:05.363383  405000 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-392561/kubeconfig
	I1206 09:27:05.367950  405000 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-392561/.minikube
	I1206 09:27:05.369254  405000 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:27:05.370459  405000 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:27:05.372160  405000 config.go:182] Loaded profile config "functional-959292": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:27:05.372262  405000 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:27:05.406948  405000 out.go:179] * Using the kvm2 driver based on existing profile
	I1206 09:27:05.408573  405000 start.go:309] selected driver: kvm2
	I1206 09:27:05.408583  405000 start.go:927] validating driver "kvm2" against &{Name:functional-959292 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-959292 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.122 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: Mount
MSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:27:05.408701  405000 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:27:05.409629  405000 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:27:05.409658  405000 cni.go:84] Creating CNI manager for ""
	I1206 09:27:05.409742  405000 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 09:27:05.409790  405000 start.go:353] cluster config:
	{Name:functional-959292 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-959292 Namespace:default AP
IServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.122 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:27:05.409882  405000 iso.go:125] acquiring lock: {Name:mkf36bf2c9901302dc74c7ac02d02007e6a978f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:27:05.411683  405000 out.go:179] * Starting "functional-959292" primary control-plane node in "functional-959292" cluster
	I1206 09:27:05.413086  405000 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 09:27:05.413115  405000 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-392561/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1206 09:27:05.413122  405000 cache.go:65] Caching tarball of preloaded images
	I1206 09:27:05.413214  405000 preload.go:238] Found /home/jenkins/minikube-integration/22047-392561/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 09:27:05.413220  405000 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1206 09:27:05.413331  405000 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/config.json ...
	I1206 09:27:05.413537  405000 start.go:360] acquireMachinesLock for functional-959292: {Name:mk0e8456872a81874c47f1b4b5997728e70c766d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1206 09:27:05.413614  405000 start.go:364] duration metric: took 62.698µs to acquireMachinesLock for "functional-959292"
	I1206 09:27:05.413630  405000 start.go:96] Skipping create...Using existing machine configuration
	I1206 09:27:05.413634  405000 fix.go:54] fixHost starting: 
	I1206 09:27:05.415678  405000 fix.go:112] recreateIfNeeded on functional-959292: state=Running err=<nil>
	W1206 09:27:05.415691  405000 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 09:27:05.417511  405000 out.go:252] * Updating the running kvm2 "functional-959292" VM ...
	I1206 09:27:05.417535  405000 machine.go:94] provisionDockerMachine start ...
	I1206 09:27:05.420691  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.421169  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:05.421191  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.421417  405000 main.go:143] libmachine: Using SSH client type: native
	I1206 09:27:05.421668  405000 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I1206 09:27:05.421672  405000 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 09:27:05.530432  405000 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-959292
	
	I1206 09:27:05.530457  405000 buildroot.go:166] provisioning hostname "functional-959292"
	I1206 09:27:05.533437  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.533923  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:05.533944  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.534145  405000 main.go:143] libmachine: Using SSH client type: native
	I1206 09:27:05.534373  405000 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I1206 09:27:05.534380  405000 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-959292 && echo "functional-959292" | sudo tee /etc/hostname
	I1206 09:27:05.673011  405000 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-959292
	
	I1206 09:27:05.676321  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.676815  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:05.676842  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.677084  405000 main.go:143] libmachine: Using SSH client type: native
	I1206 09:27:05.677310  405000 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I1206 09:27:05.677325  405000 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-959292' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-959292/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-959292' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 09:27:05.790461  405000 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 09:27:05.790486  405000 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22047-392561/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-392561/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-392561/.minikube}
	I1206 09:27:05.790531  405000 buildroot.go:174] setting up certificates
	I1206 09:27:05.790542  405000 provision.go:84] configureAuth start
	I1206 09:27:05.793758  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.794112  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:05.794125  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.796610  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.797015  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:05.797033  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.797173  405000 provision.go:143] copyHostCerts
	I1206 09:27:05.797219  405000 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-392561/.minikube/ca.pem, removing ...
	I1206 09:27:05.797225  405000 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-392561/.minikube/ca.pem
	I1206 09:27:05.797294  405000 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-392561/.minikube/ca.pem (1082 bytes)
	I1206 09:27:05.797448  405000 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-392561/.minikube/cert.pem, removing ...
	I1206 09:27:05.797454  405000 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-392561/.minikube/cert.pem
	I1206 09:27:05.797481  405000 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-392561/.minikube/cert.pem (1123 bytes)
	I1206 09:27:05.797559  405000 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-392561/.minikube/key.pem, removing ...
	I1206 09:27:05.797562  405000 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-392561/.minikube/key.pem
	I1206 09:27:05.797584  405000 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-392561/.minikube/key.pem (1679 bytes)
	I1206 09:27:05.797630  405000 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-392561/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca-key.pem org=jenkins.functional-959292 san=[127.0.0.1 192.168.39.122 functional-959292 localhost minikube]
	I1206 09:27:05.927749  405000 provision.go:177] copyRemoteCerts
	I1206 09:27:05.927805  405000 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 09:27:05.930467  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.930995  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:05.931017  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.931182  405000 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/functional-959292/id_rsa Username:docker}
	I1206 09:27:06.020293  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 09:27:06.062999  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 09:27:06.103800  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 09:27:06.135603  405000 provision.go:87] duration metric: took 345.046364ms to configureAuth
	I1206 09:27:06.135630  405000 buildroot.go:189] setting minikube options for container-runtime
	I1206 09:27:06.135924  405000 config.go:182] Loaded profile config "functional-959292": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:27:06.138757  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:06.139157  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:06.139176  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:06.139330  405000 main.go:143] libmachine: Using SSH client type: native
	I1206 09:27:06.139546  405000 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I1206 09:27:06.139563  405000 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 09:27:11.746822  405000 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 09:27:11.746840  405000 machine.go:97] duration metric: took 6.329297702s to provisionDockerMachine
	I1206 09:27:11.746854  405000 start.go:293] postStartSetup for "functional-959292" (driver="kvm2")
	I1206 09:27:11.746876  405000 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 09:27:11.746961  405000 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 09:27:11.750570  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:11.751014  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:11.751033  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:11.751196  405000 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/functional-959292/id_rsa Username:docker}
	I1206 09:27:11.837868  405000 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 09:27:11.843271  405000 info.go:137] Remote host: Buildroot 2025.02
	I1206 09:27:11.843298  405000 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-392561/.minikube/addons for local assets ...
	I1206 09:27:11.843387  405000 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-392561/.minikube/files for local assets ...
	I1206 09:27:11.843463  405000 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-392561/.minikube/files/etc/ssl/certs/3965342.pem -> 3965342.pem in /etc/ssl/certs
	I1206 09:27:11.843553  405000 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-392561/.minikube/files/etc/test/nested/copy/396534/hosts -> hosts in /etc/test/nested/copy/396534
	I1206 09:27:11.843597  405000 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/396534
	I1206 09:27:11.856490  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/files/etc/ssl/certs/3965342.pem --> /etc/ssl/certs/3965342.pem (1708 bytes)
	I1206 09:27:11.887680  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/files/etc/test/nested/copy/396534/hosts --> /etc/test/nested/copy/396534/hosts (40 bytes)
	I1206 09:27:11.917471  405000 start.go:296] duration metric: took 170.599577ms for postStartSetup
	I1206 09:27:11.917525  405000 fix.go:56] duration metric: took 6.503890577s for fixHost
	I1206 09:27:11.920391  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:11.920829  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:11.920843  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:11.921039  405000 main.go:143] libmachine: Using SSH client type: native
	I1206 09:27:11.921236  405000 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I1206 09:27:11.921240  405000 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1206 09:27:12.029674  405000 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765013232.025035703
	
	I1206 09:27:12.029690  405000 fix.go:216] guest clock: 1765013232.025035703
	I1206 09:27:12.029728  405000 fix.go:229] Guest: 2025-12-06 09:27:12.025035703 +0000 UTC Remote: 2025-12-06 09:27:11.917528099 +0000 UTC m=+6.615934527 (delta=107.507604ms)
	I1206 09:27:12.029754  405000 fix.go:200] guest clock delta is within tolerance: 107.507604ms
	I1206 09:27:12.029760  405000 start.go:83] releasing machines lock for "functional-959292", held for 6.616137159s
	I1206 09:27:12.032871  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:12.033367  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:12.033386  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:12.033972  405000 ssh_runner.go:195] Run: cat /version.json
	I1206 09:27:12.034041  405000 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 09:27:12.037021  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:12.037356  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:12.037372  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:12.037454  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:12.037528  405000 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/functional-959292/id_rsa Username:docker}
	I1206 09:27:12.037968  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:12.037994  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:12.038195  405000 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/functional-959292/id_rsa Username:docker}
	I1206 09:27:12.135127  405000 ssh_runner.go:195] Run: systemctl --version
	I1206 09:27:12.178512  405000 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 09:27:12.381531  405000 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 09:27:12.396979  405000 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 09:27:12.397040  405000 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 09:27:12.420072  405000 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 09:27:12.420094  405000 start.go:496] detecting cgroup driver to use...
	I1206 09:27:12.420194  405000 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 09:27:12.466799  405000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 09:27:12.509494  405000 docker.go:218] disabling cri-docker service (if available) ...
	I1206 09:27:12.509562  405000 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 09:27:12.561878  405000 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 09:27:12.598609  405000 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 09:27:12.872841  405000 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 09:27:13.059884  405000 docker.go:234] disabling docker service ...
	I1206 09:27:13.059949  405000 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 09:27:13.093867  405000 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 09:27:13.120308  405000 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 09:27:13.320589  405000 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 09:27:13.498865  405000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 09:27:13.515293  405000 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:27:13.538889  405000 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 09:27:13.538948  405000 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:27:13.551961  405000 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 09:27:13.552020  405000 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:27:13.565424  405000 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:27:13.578556  405000 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:27:13.591163  405000 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 09:27:13.605026  405000 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:27:13.618520  405000 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:27:13.632537  405000 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:27:13.646329  405000 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 09:27:13.658570  405000 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 09:27:13.670728  405000 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:27:13.846425  405000 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 09:28:44.155984  405000 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.309526087s)
	I1206 09:28:44.156040  405000 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 09:28:44.156100  405000 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 09:28:44.162119  405000 start.go:564] Will wait 60s for crictl version
	I1206 09:28:44.162184  405000 ssh_runner.go:195] Run: which crictl
	I1206 09:28:44.166332  405000 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1206 09:28:44.207039  405000 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1206 09:28:44.207130  405000 ssh_runner.go:195] Run: crio --version
	I1206 09:28:44.238213  405000 ssh_runner.go:195] Run: crio --version
	I1206 09:28:44.269956  405000 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.29.1 ...
	I1206 09:28:44.274130  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:28:44.274499  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:28:44.274517  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:28:44.274693  405000 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1206 09:28:44.281120  405000 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1206 09:28:44.282155  405000 kubeadm.go:884] updating cluster {Name:functional-959292 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.35.0-beta.0 ClusterName:functional-959292 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.122 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
ntString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:28:44.282326  405000 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 09:28:44.282393  405000 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:28:44.325810  405000 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:28:44.325822  405000 crio.go:433] Images already preloaded, skipping extraction
	I1206 09:28:44.325876  405000 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:28:44.356541  405000 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:28:44.356553  405000 cache_images.go:86] Images are preloaded, skipping loading
	I1206 09:28:44.356560  405000 kubeadm.go:935] updating node { 192.168.39.122 8441 v1.35.0-beta.0 crio true true} ...
	I1206 09:28:44.356678  405000 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-959292 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.122
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-959292 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 09:28:44.356770  405000 ssh_runner.go:195] Run: crio config
	I1206 09:28:44.403814  405000 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1206 09:28:44.403837  405000 cni.go:84] Creating CNI manager for ""
	I1206 09:28:44.403854  405000 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 09:28:44.403866  405000 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 09:28:44.403896  405000 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.122 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-959292 NodeName:functional-959292 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.122"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.122 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false Kubelet
ConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:28:44.404049  405000 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.122
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-959292"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.122"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.122"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 09:28:44.404129  405000 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 09:28:44.416911  405000 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:28:44.416984  405000 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:28:44.431220  405000 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I1206 09:28:44.454568  405000 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 09:28:44.475028  405000 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2077 bytes)
	I1206 09:28:44.495506  405000 ssh_runner.go:195] Run: grep 192.168.39.122	control-plane.minikube.internal$ /etc/hosts
	I1206 09:28:44.499849  405000 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:28:44.665494  405000 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:28:44.684875  405000 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292 for IP: 192.168.39.122
	I1206 09:28:44.684888  405000 certs.go:195] generating shared ca certs ...
	I1206 09:28:44.684904  405000 certs.go:227] acquiring lock for ca certs: {Name:mk3de97d1b446a24abef5e763ff5edd1f090afa6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:28:44.685063  405000 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-392561/.minikube/ca.key
	I1206 09:28:44.685107  405000 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-392561/.minikube/proxy-client-ca.key
	I1206 09:28:44.685113  405000 certs.go:257] generating profile certs ...
	I1206 09:28:44.685293  405000 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/client.key
	I1206 09:28:44.685367  405000 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/apiserver.key.3de1f674
	I1206 09:28:44.685410  405000 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/proxy-client.key
	I1206 09:28:44.685527  405000 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/396534.pem (1338 bytes)
	W1206 09:28:44.685557  405000 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-392561/.minikube/certs/396534_empty.pem, impossibly tiny 0 bytes
	I1206 09:28:44.685563  405000 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 09:28:44.685587  405000 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca.pem (1082 bytes)
	I1206 09:28:44.685606  405000 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:28:44.685624  405000 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/key.pem (1679 bytes)
	I1206 09:28:44.685662  405000 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-392561/.minikube/files/etc/ssl/certs/3965342.pem (1708 bytes)
	I1206 09:28:44.686407  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:28:44.717857  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 09:28:44.748141  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:28:44.777905  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1206 09:28:44.808483  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 09:28:44.839184  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 09:28:44.869544  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:28:44.899600  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 09:28:44.929911  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/files/etc/ssl/certs/3965342.pem --> /usr/share/ca-certificates/3965342.pem (1708 bytes)
	I1206 09:28:44.959256  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:28:44.988361  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/certs/396534.pem --> /usr/share/ca-certificates/396534.pem (1338 bytes)
	I1206 09:28:45.017387  405000 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:28:45.038256  405000 ssh_runner.go:195] Run: openssl version
	I1206 09:28:45.047367  405000 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3965342.pem
	I1206 09:28:45.059555  405000 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3965342.pem /etc/ssl/certs/3965342.pem
	I1206 09:28:45.071389  405000 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3965342.pem
	I1206 09:28:45.076661  405000 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 09:24 /usr/share/ca-certificates/3965342.pem
	I1206 09:28:45.076758  405000 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3965342.pem
	I1206 09:28:45.084361  405000 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 09:28:45.096215  405000 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:28:45.108030  405000 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:28:45.119412  405000 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:28:45.124889  405000 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 09:12 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:28:45.124968  405000 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:28:45.132255  405000 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:28:45.143921  405000 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/396534.pem
	I1206 09:28:45.155198  405000 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/396534.pem /etc/ssl/certs/396534.pem
	I1206 09:28:45.166926  405000 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/396534.pem
	I1206 09:28:45.172011  405000 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 09:24 /usr/share/ca-certificates/396534.pem
	I1206 09:28:45.172075  405000 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/396534.pem
	I1206 09:28:45.179097  405000 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 09:28:45.190195  405000 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:28:45.195680  405000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 09:28:45.203086  405000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 09:28:45.210171  405000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 09:28:45.217010  405000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 09:28:45.223948  405000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 09:28:45.230923  405000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1206 09:28:45.238258  405000 kubeadm.go:401] StartCluster: {Name:functional-959292 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35
.0-beta.0 ClusterName:functional-959292 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.122 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountS
tring: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:28:45.238386  405000 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:28:45.238444  405000 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:28:45.272278  405000 cri.go:89] found id: "3cf20b6d798d084267897441ec085d74b908c5ce44f5a0104461078dade3e3db"
	I1206 09:28:45.272295  405000 cri.go:89] found id: "7e0e61506a238944c5ca29e0e8cd96198ed5f63ae148b71a98b24338f8cec799"
	I1206 09:28:45.272300  405000 cri.go:89] found id: "a7a0409ceca2bb30bc27bd580d3e96626e7b2fcec3e9bc911aba8663b88b14ab"
	I1206 09:28:45.272304  405000 cri.go:89] found id: "6e5074c405f22f240aeee9223542f189d50079b24d71ecc6920bdadbd0ba3be6"
	I1206 09:28:45.272307  405000 cri.go:89] found id: "422ce5b897d2b576b825cdca2cb0d613bfe2c99b74fe8984cd5904f6702c11f5"
	I1206 09:28:45.272311  405000 cri.go:89] found id: "f007b54f29b7c249c166e8323973f208279a7e516813e500f58a370519efedc3"
	I1206 09:28:45.272314  405000 cri.go:89] found id: "db52b0948589f2ebba355737cd876e78592bcd1b1561e85c9b037a02e9276902"
	I1206 09:28:45.272317  405000 cri.go:89] found id: ""
	I1206 09:28:45.272395  405000 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
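The openssl x509 -noout ... -checkend 86400 probes in the log above ask whether each control-plane certificate will still be valid 24 hours from now before it is reused. A minimal Go sketch of the same check, with a hypothetical certificate file in the working directory standing in for the ones the run reads from /var/lib/minikube/certs on the node:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical local file; the run above checks certs under /var/lib/minikube/certs.
	data, err := os.ReadFile("apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Same question as `openssl x509 -noout -checkend 86400`:
	// will this certificate still be valid 24 hours from now?
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least another 24h")
}

openssl's -checkend N exits non-zero when the certificate expires within N seconds, which is what the NotAfter comparison mirrors here.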
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-959292 -n functional-959292
helpers_test.go:269: (dbg) Run:  kubectl --context functional-959292 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (1.78s)
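The kubectl query at helpers_test.go:269 above lists every pod whose phase is not Running. For reference, a rough client-go sketch of the same field-selector query; the kubeconfig path is hypothetical (the CI run points KUBECONFIG at its own minikube-integration home), and this is only an illustration of the filter, not the test helper itself:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path standing in for the CI one.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// List pods in all namespaces whose phase is not Running, the same filter
	// the post-mortem step applies with --field-selector.
	pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}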

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (302s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-959292 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-959292 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-959292 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-959292 --alsologtostderr -v=1] stderr:
I1206 09:43:06.714132  409483 out.go:360] Setting OutFile to fd 1 ...
I1206 09:43:06.714463  409483 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:43:06.714477  409483 out.go:374] Setting ErrFile to fd 2...
I1206 09:43:06.714484  409483 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:43:06.714780  409483 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-392561/.minikube/bin
I1206 09:43:06.715052  409483 mustload.go:66] Loading cluster: functional-959292
I1206 09:43:06.715423  409483 config.go:182] Loaded profile config "functional-959292": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 09:43:06.717460  409483 host.go:66] Checking if "functional-959292" exists ...
I1206 09:43:06.717659  409483 api_server.go:166] Checking apiserver status ...
I1206 09:43:06.717731  409483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1206 09:43:06.720947  409483 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
I1206 09:43:06.721434  409483 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
I1206 09:43:06.721472  409483 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
I1206 09:43:06.721670  409483 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/functional-959292/id_rsa Username:docker}
I1206 09:43:06.818509  409483 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/6993/cgroup
W1206 09:43:06.830113  409483 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/6993/cgroup: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1206 09:43:06.830179  409483 ssh_runner.go:195] Run: ls
I1206 09:43:06.837811  409483 api_server.go:253] Checking apiserver healthz at https://192.168.39.122:8441/healthz ...
I1206 09:43:06.845836  409483 api_server.go:279] https://192.168.39.122:8441/healthz returned 200:
ok
W1206 09:43:06.845902  409483 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1206 09:43:06.846112  409483 config.go:182] Loaded profile config "functional-959292": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 09:43:06.846135  409483 addons.go:70] Setting dashboard=true in profile "functional-959292"
I1206 09:43:06.846149  409483 addons.go:239] Setting addon dashboard=true in "functional-959292"
I1206 09:43:06.846182  409483 host.go:66] Checking if "functional-959292" exists ...
I1206 09:43:06.850150  409483 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1206 09:43:06.851815  409483 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1206 09:43:06.853148  409483 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1206 09:43:06.853169  409483 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1206 09:43:06.856110  409483 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
I1206 09:43:06.856656  409483 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
I1206 09:43:06.856685  409483 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
I1206 09:43:06.856893  409483 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/functional-959292/id_rsa Username:docker}
I1206 09:43:06.985446  409483 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1206 09:43:06.985474  409483 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1206 09:43:07.025602  409483 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1206 09:43:07.025629  409483 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1206 09:43:07.056337  409483 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1206 09:43:07.056368  409483 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1206 09:43:07.123843  409483 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1206 09:43:07.123866  409483 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1206 09:43:07.163499  409483 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
I1206 09:43:07.163526  409483 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1206 09:43:07.195102  409483 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1206 09:43:07.195128  409483 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1206 09:43:07.229822  409483 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1206 09:43:07.229843  409483 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1206 09:43:07.256829  409483 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1206 09:43:07.256863  409483 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1206 09:43:07.281417  409483 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1206 09:43:07.281444  409483 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1206 09:43:07.307255  409483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1206 09:43:08.272344  409483 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

                                                
                                                
	minikube -p functional-959292 addons enable metrics-server

                                                
                                                
I1206 09:43:08.273653  409483 addons.go:202] Writing out "functional-959292" config to set dashboard=true...
W1206 09:43:08.273955  409483 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1206 09:43:08.274748  409483 kapi.go:59] client config for functional-959292: &rest.Config{Host:"https://192.168.39.122:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/client.crt", KeyFile:"/home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/client.key", CAFile:"/home/jenkins/minikube-integration/22047-392561/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil)
, NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28156e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1206 09:43:08.275400  409483 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1206 09:43:08.275421  409483 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1206 09:43:08.275428  409483 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1206 09:43:08.275432  409483 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1206 09:43:08.275436  409483 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1206 09:43:08.285776  409483 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  3ea6158d-183f-45cb-a78e-503dfff0798c 1589 0 2025-12-06 09:43:08 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-12-06 09:43:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.107.116.151,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.107.116.151],IPFamilies:[IPv4],AllocateLoadBalan
cerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1206 09:43:08.285948  409483 out.go:285] * Launching proxy ...
* Launching proxy ...
I1206 09:43:08.286022  409483 dashboard.go:154] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-959292 proxy --port 36195]
I1206 09:43:08.286498  409483 dashboard.go:159] Waiting for kubectl to output host:port ...
I1206 09:43:08.333349  409483 dashboard.go:177] proxy stdout: Starting to serve on 127.0.0.1:36195
W1206 09:43:08.333427  409483 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I1206 09:43:08.344119  409483 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8bae8e9f-5778-407a-b3d4-42cc4b92ca4b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:43:08 GMT]] Body:0xc001674200 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c2000 TLS:<nil>}
I1206 09:43:08.344217  409483 retry.go:31] will retry after 112.466µs: Temporary Error: unexpected response code: 503
I1206 09:43:08.348147  409483 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e171ea65-717f-4bd7-8b20-b759314d8c32] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:43:08 GMT]] Body:0xc0016742c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c2140 TLS:<nil>}
I1206 09:43:08.348214  409483 retry.go:31] will retry after 84.839µs: Temporary Error: unexpected response code: 503
I1206 09:43:08.351899  409483 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[692024ff-c38b-4cbb-a166-39bf028bb852] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:43:08 GMT]] Body:0xc001700140 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c2280 TLS:<nil>}
I1206 09:43:08.351967  409483 retry.go:31] will retry after 249.883µs: Temporary Error: unexpected response code: 503
I1206 09:43:08.355108  409483 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4b15ad39-49bc-4c0d-862b-659acc8f4b1a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:43:08 GMT]] Body:0xc0016743c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00034a000 TLS:<nil>}
I1206 09:43:08.355148  409483 retry.go:31] will retry after 503.899µs: Temporary Error: unexpected response code: 503
I1206 09:43:08.358775  409483 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[05a3a4f2-edd0-43f7-997c-61b2eed1121b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:43:08 GMT]] Body:0xc001700280 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c23c0 TLS:<nil>}
I1206 09:43:08.358815  409483 retry.go:31] will retry after 375.804µs: Temporary Error: unexpected response code: 503
I1206 09:43:08.362315  409483 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bef3f99f-0864-443d-8eef-532c81d6c60c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:43:08 GMT]] Body:0xc0016744c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00034a140 TLS:<nil>}
I1206 09:43:08.362369  409483 retry.go:31] will retry after 631.598µs: Temporary Error: unexpected response code: 503
I1206 09:43:08.366302  409483 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[132209c0-4f0b-4408-9fc5-f397aeed6227] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:43:08 GMT]] Body:0xc0008ef4c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c2500 TLS:<nil>}
I1206 09:43:08.366346  409483 retry.go:31] will retry after 1.503374ms: Temporary Error: unexpected response code: 503
I1206 09:43:08.370912  409483 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8ff7c552-ebfc-4329-bc17-da9cfc9451e5] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:43:08 GMT]] Body:0xc0017003c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002088c0 TLS:<nil>}
I1206 09:43:08.370954  409483 retry.go:31] will retry after 1.729893ms: Temporary Error: unexpected response code: 503
I1206 09:43:08.375769  409483 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[02d77354-65a8-4527-a861-31162cf72066] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:43:08 GMT]] Body:0xc001674580 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00034a3c0 TLS:<nil>}
I1206 09:43:08.375840  409483 retry.go:31] will retry after 3.440984ms: Temporary Error: unexpected response code: 503
I1206 09:43:08.382547  409483 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9ddb66f3-50e9-43c8-9d3b-b1ff4f380531] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:43:08 GMT]] Body:0xc0017004c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c2780 TLS:<nil>}
I1206 09:43:08.382599  409483 retry.go:31] will retry after 4.767497ms: Temporary Error: unexpected response code: 503
I1206 09:43:08.390266  409483 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e2da704d-ee23-4f94-81fb-2be63566e7c6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:43:08 GMT]] Body:0xc001674680 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00034a500 TLS:<nil>}
I1206 09:43:08.390310  409483 retry.go:31] will retry after 4.455404ms: Temporary Error: unexpected response code: 503
I1206 09:43:08.398240  409483 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a55f4bd8-b59f-4078-8fc0-d0150bef6514] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:43:08 GMT]] Body:0xc0008ef600 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c28c0 TLS:<nil>}
I1206 09:43:08.398306  409483 retry.go:31] will retry after 10.629519ms: Temporary Error: unexpected response code: 503
I1206 09:43:08.412658  409483 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4ea15894-db7b-4118-9d8c-f624cbad20d7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:43:08 GMT]] Body:0xc001700600 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000208a00 TLS:<nil>}
I1206 09:43:08.412739  409483 retry.go:31] will retry after 13.843581ms: Temporary Error: unexpected response code: 503
I1206 09:43:08.431209  409483 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1607c1e5-3ff1-41a5-8c34-eaeb6e492668] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:43:08 GMT]] Body:0xc001674740 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00034a780 TLS:<nil>}
I1206 09:43:08.431276  409483 retry.go:31] will retry after 21.554244ms: Temporary Error: unexpected response code: 503
I1206 09:43:08.457783  409483 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[db6be213-7e23-4169-a331-3a2e0bacfaa6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:43:08 GMT]] Body:0xc0008ef780 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c2a00 TLS:<nil>}
I1206 09:43:08.457840  409483 retry.go:31] will retry after 29.204097ms: Temporary Error: unexpected response code: 503
I1206 09:43:08.491758  409483 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a29d2ad0-eb67-4519-8608-2041b438a3db] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:43:08 GMT]] Body:0xc001674840 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000208b40 TLS:<nil>}
I1206 09:43:08.491835  409483 retry.go:31] will retry after 30.511286ms: Temporary Error: unexpected response code: 503
I1206 09:43:08.527413  409483 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[05f3cf7f-04ea-41d2-abea-634d7f241947] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:43:08 GMT]] Body:0xc0008ef900 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c2b40 TLS:<nil>}
I1206 09:43:08.527483  409483 retry.go:31] will retry after 91.049087ms: Temporary Error: unexpected response code: 503
I1206 09:43:08.622756  409483 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b17ccf9c-c578-4095-baab-c328b1676659] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:43:08 GMT]] Body:0xc0017006c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000208c80 TLS:<nil>}
I1206 09:43:08.622821  409483 retry.go:31] will retry after 57.69379ms: Temporary Error: unexpected response code: 503
I1206 09:43:08.684434  409483 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9e27a262-8d13-4e24-b6c3-85cfd1cec613] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:43:08 GMT]] Body:0xc001674980 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00034a8c0 TLS:<nil>}
I1206 09:43:08.684499  409483 retry.go:31] will retry after 171.831516ms: Temporary Error: unexpected response code: 503
I1206 09:43:08.860778  409483 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cc1844cb-7bff-472d-9e3e-c705316e5026] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:43:08 GMT]] Body:0xc0016c0040 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c2c80 TLS:<nil>}
I1206 09:43:08.860847  409483 retry.go:31] will retry after 307.736909ms: Temporary Error: unexpected response code: 503
I1206 09:43:09.172817  409483 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8c20abb5-1eec-4ec5-93cb-7990b04bf750] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:43:09 GMT]] Body:0xc001700780 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000208dc0 TLS:<nil>}
I1206 09:43:09.172886  409483 retry.go:31] will retry after 257.005619ms: Temporary Error: unexpected response code: 503
I1206 09:43:09.433621  409483 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cf88aca2-6be5-4f99-8148-b01965ae47e6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:43:09 GMT]] Body:0xc0016c0140 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00034aa00 TLS:<nil>}
I1206 09:43:09.433688  409483 retry.go:31] will retry after 593.766956ms: Temporary Error: unexpected response code: 503
I1206 09:43:10.031889  409483 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[05af2d9e-9b63-4227-86da-7b431cb3a754] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:43:10 GMT]] Body:0xc001674a80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000208f00 TLS:<nil>}
I1206 09:43:10.031971  409483 retry.go:31] will retry after 540.584118ms: Temporary Error: unexpected response code: 503
I1206 09:43:10.575927  409483 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6936f9a4-6894-4ac4-9dab-54fcc3b64bf8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:43:10 GMT]] Body:0xc0016c0240 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c2dc0 TLS:<nil>}
I1206 09:43:10.575998  409483 retry.go:31] will retry after 874.094857ms: Temporary Error: unexpected response code: 503
I1206 09:43:11.454283  409483 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c1192a90-ffaf-4b28-be77-bfd900742316] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:43:11 GMT]] Body:0xc001674bc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209400 TLS:<nil>}
I1206 09:43:11.454357  409483 retry.go:31] will retry after 2.250227404s: Temporary Error: unexpected response code: 503
I1206 09:43:13.710137  409483 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0aded61d-4256-4892-a2e9-e1c782e5cf0a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:43:13 GMT]] Body:0xc0016c0300 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c2f00 TLS:<nil>}
I1206 09:43:13.710205  409483 retry.go:31] will retry after 1.299004429s: Temporary Error: unexpected response code: 503
I1206 09:43:15.012939  409483 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ab9d42a5-acfb-4b8d-afdf-407766cadc5f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:43:14 GMT]] Body:0xc001700880 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002097c0 TLS:<nil>}
I1206 09:43:15.013027  409483 retry.go:31] will retry after 2.974187361s: Temporary Error: unexpected response code: 503
I1206 09:43:17.990767  409483 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ce748bfe-a00a-46e4-a912-98ad59329c3f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:43:17 GMT]] Body:0xc001674cc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00034ab40 TLS:<nil>}
I1206 09:43:17.990839  409483 retry.go:31] will retry after 5.461014937s: Temporary Error: unexpected response code: 503
I1206 09:43:23.457295  409483 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[474eed40-6ec2-44c3-ab3d-2553cfb7a891] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:43:23 GMT]] Body:0xc0016c0400 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00034ac80 TLS:<nil>}
I1206 09:43:23.457366  409483 retry.go:31] will retry after 10.781120866s: Temporary Error: unexpected response code: 503
I1206 09:43:34.244298  409483 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[95945d4c-5a76-4390-87f7-012ef57e56c4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:43:34 GMT]] Body:0xc001674d40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c3040 TLS:<nil>}
I1206 09:43:34.244368  409483 retry.go:31] will retry after 7.442274494s: Temporary Error: unexpected response code: 503
I1206 09:43:41.692973  409483 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[93d20db1-410e-4ca2-b122-097a77dfb36b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:43:41 GMT]] Body:0xc001674e00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c3180 TLS:<nil>}
I1206 09:43:41.693078  409483 retry.go:31] will retry after 16.034335685s: Temporary Error: unexpected response code: 503
I1206 09:43:57.730917  409483 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8a8af71e-1722-4a92-a353-ce6f3feacff6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:43:57 GMT]] Body:0xc0016c0500 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c32c0 TLS:<nil>}
I1206 09:43:57.730986  409483 retry.go:31] will retry after 21.847374295s: Temporary Error: unexpected response code: 503
I1206 09:44:19.582611  409483 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f563e6cd-59be-49d5-86ce-ee8deb5bcc58] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:44:19 GMT]] Body:0xc001700a40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209900 TLS:<nil>}
I1206 09:44:19.582730  409483 retry.go:31] will retry after 52.53823667s: Temporary Error: unexpected response code: 503
I1206 09:45:12.126514  409483 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5efe10fb-8ebd-4915-8f3c-e23c6b409c4c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:45:12 GMT]] Body:0xc001674080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002c3400 TLS:<nil>}
I1206 09:45:12.126609  409483 retry.go:31] will retry after 1m27.444786732s: Temporary Error: unexpected response code: 503
I1206 09:46:39.575939  409483 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bcaefc3a-7461-4917-8d1d-6c1926ba5d8b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:46:39 GMT]] Body:0xc00093c300 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00034adc0 TLS:<nil>}
I1206 09:46:39.576042  409483 retry.go:31] will retry after 1m27.262470545s: Temporary Error: unexpected response code: 503
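The retry trail above is the dashboard health verification polling the kubectl proxy endpoint and backing off (from microseconds up to well over a minute) while it keeps receiving 503 Service Unavailable. A rough Go sketch of that poll-until-200 shape against the same proxy URL; it assumes the kubectl proxy from this run is still serving on 127.0.0.1:36195 and is a sketch of the retry pattern, not minikube's own implementation:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitFor200 polls url, roughly doubling the wait between attempts, until it
// gets a 200 or the deadline passes. Non-200 responses (such as the 503s in
// the trail above) are treated as temporary errors.
func waitFor200(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	wait := 100 * time.Millisecond
	for {
		resp, err := http.Get(url)
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				return nil
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("%s not healthy within %s", url, timeout)
		}
		time.Sleep(wait)
		if wait < 30*time.Second {
			wait *= 2
		}
	}
}

func main() {
	// Port 36195 matches the `kubectl proxy` started by this run; the URL is
	// only reachable while that proxy is still alive.
	url := "http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"
	if err := waitFor200(url, 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("dashboard proxy answered 200 OK")
}

In this run the 503s never stopped, so a loop like this would simply hit its deadline, which is effectively what the test did before giving up without a URL.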
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-959292 -n functional-959292
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-959292 logs -n 25: (1.229235306s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                        ARGS                                                                         │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ functional-310626 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                           │ functional-310626    │ jenkins │ v1.37.0 │ 06 Dec 25 09:24 UTC │ 06 Dec 25 09:24 UTC │
	│ start   │ -p functional-310626 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio                                             │ functional-310626    │ jenkins │ v1.37.0 │ 06 Dec 25 09:24 UTC │                     │
	│ start   │ --download-only -p binary-mirror-961783 --alsologtostderr --binary-mirror http://127.0.0.1:35409 --driver=kvm2  --container-runtime=crio            │ binary-mirror-961783 │ jenkins │ v1.37.0 │ 06 Dec 25 09:12 UTC │                     │
	│ ssh     │ functional-310626 ssh sudo cat /etc/ssl/certs/3965342.pem                                                                                           │ functional-310626    │ jenkins │ v1.37.0 │ 06 Dec 25 09:24 UTC │ 06 Dec 25 09:24 UTC │
	│ start   │ -p functional-310626 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                       │ functional-310626    │ jenkins │ v1.37.0 │ 06 Dec 25 09:24 UTC │                     │
	│ ssh     │ functional-959292 ssh pgrep buildkitd                                                                                                               │ functional-959292    │ jenkins │ v1.37.0 │ 06 Dec 25 09:43 UTC │                     │
	│ image   │ functional-959292 image ls --format json --alsologtostderr                                                                                          │ functional-959292    │ jenkins │ v1.37.0 │ 06 Dec 25 09:43 UTC │ 06 Dec 25 09:43 UTC │
	│ image   │ functional-959292 image build -t localhost/my-image:functional-959292 testdata/build --alsologtostderr                                              │ functional-959292    │ jenkins │ v1.37.0 │ 06 Dec 25 09:43 UTC │ 06 Dec 25 09:43 UTC │
	│ image   │ functional-959292 image ls --format table --alsologtostderr                                                                                         │ functional-959292    │ jenkins │ v1.37.0 │ 06 Dec 25 09:43 UTC │ 06 Dec 25 09:43 UTC │
	│ ssh     │ functional-959292 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-959292    │ jenkins │ v1.37.0 │ 06 Dec 25 09:43 UTC │                     │
	│ ssh     │ functional-959292 ssh sudo umount -f /mount-9p                                                                                                      │ functional-959292    │ jenkins │ v1.37.0 │ 06 Dec 25 09:43 UTC │ 06 Dec 25 09:43 UTC │
	│ mount   │ -p functional-959292 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3668097071/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-959292    │ jenkins │ v1.37.0 │ 06 Dec 25 09:43 UTC │                     │
	│ ssh     │ functional-959292 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-959292    │ jenkins │ v1.37.0 │ 06 Dec 25 09:43 UTC │                     │
	│ ssh     │ functional-959292 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-959292    │ jenkins │ v1.37.0 │ 06 Dec 25 09:43 UTC │ 06 Dec 25 09:43 UTC │
	│ ssh     │ functional-959292 ssh -- ls -la /mount-9p                                                                                                           │ functional-959292    │ jenkins │ v1.37.0 │ 06 Dec 25 09:43 UTC │ 06 Dec 25 09:43 UTC │
	│ ssh     │ functional-959292 ssh sudo umount -f /mount-9p                                                                                                      │ functional-959292    │ jenkins │ v1.37.0 │ 06 Dec 25 09:43 UTC │                     │
	│ image   │ functional-959292 image ls                                                                                                                          │ functional-959292    │ jenkins │ v1.37.0 │ 06 Dec 25 09:43 UTC │ 06 Dec 25 09:43 UTC │
	│ ssh     │ functional-959292 ssh findmnt -T /mount1                                                                                                            │ functional-959292    │ jenkins │ v1.37.0 │ 06 Dec 25 09:43 UTC │                     │
	│ mount   │ -p functional-959292 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3391273670/001:/mount1 --alsologtostderr -v=1                │ functional-959292    │ jenkins │ v1.37.0 │ 06 Dec 25 09:43 UTC │                     │
	│ mount   │ -p functional-959292 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3391273670/001:/mount2 --alsologtostderr -v=1                │ functional-959292    │ jenkins │ v1.37.0 │ 06 Dec 25 09:43 UTC │                     │
	│ mount   │ -p functional-959292 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3391273670/001:/mount3 --alsologtostderr -v=1                │ functional-959292    │ jenkins │ v1.37.0 │ 06 Dec 25 09:43 UTC │                     │
	│ ssh     │ functional-959292 ssh findmnt -T /mount1                                                                                                            │ functional-959292    │ jenkins │ v1.37.0 │ 06 Dec 25 09:43 UTC │ 06 Dec 25 09:43 UTC │
	│ ssh     │ functional-959292 ssh findmnt -T /mount2                                                                                                            │ functional-959292    │ jenkins │ v1.37.0 │ 06 Dec 25 09:43 UTC │ 06 Dec 25 09:43 UTC │
	│ ssh     │ functional-959292 ssh findmnt -T /mount3                                                                                                            │ functional-959292    │ jenkins │ v1.37.0 │ 06 Dec 25 09:43 UTC │ 06 Dec 25 09:43 UTC │
	│ mount   │ -p functional-959292 --kill=true                                                                                                                    │ functional-959292    │ jenkins │ v1.37.0 │ 06 Dec 25 09:43 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:43:05
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:43:05.307895  409384 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:43:05.308005  409384 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:43:05.308012  409384 out.go:374] Setting ErrFile to fd 2...
	I1206 09:43:05.308017  409384 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:43:05.308294  409384 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-392561/.minikube/bin
	I1206 09:43:05.308746  409384 out.go:368] Setting JSON to false
	I1206 09:43:05.309662  409384 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5125,"bootTime":1765009060,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:43:05.309739  409384 start.go:143] virtualization: kvm guest
	I1206 09:43:05.311877  409384 out.go:179] * [functional-959292] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:43:05.313466  409384 notify.go:221] Checking for updates...
	I1206 09:43:05.313471  409384 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:43:05.315144  409384 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:43:05.316606  409384 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-392561/kubeconfig
	I1206 09:43:05.318072  409384 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-392561/.minikube
	I1206 09:43:05.319519  409384 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:43:05.321149  409384 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:43:05.323362  409384 config.go:182] Loaded profile config "functional-959292": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:43:05.324142  409384 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:43:05.358478  409384 out.go:179] * Using the kvm2 driver based on the existing profile
	I1206 09:43:05.359876  409384 start.go:309] selected driver: kvm2
	I1206 09:43:05.359891  409384 start.go:927] validating driver "kvm2" against &{Name:functional-959292 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-959292 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.122 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:43:05.360015  409384 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:43:05.362081  409384 out.go:203] 
	W1206 09:43:05.363570  409384 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250MiB is less than the usable minimum of 1800MB
	I1206 09:43:05.364961  409384 out.go:203] 
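The start above exits because the requested memory (250MiB) is below the minimum minikube will accept (1800MB). The following is an illustration-only Go sketch of a check with that shape; the constant, function name, and message wording are assumptions for the sketch, not minikube's actual implementation.

	package main

	import "fmt"

	const minUsableMemoryMB = 1800 // minimum reported in the log above

	// validateMemory mimics the kind of check that yields the
	// RSRC_INSUFFICIENT_REQ_MEMORY exit seen in this run (assumed shape).
	func validateMemory(requestedMB int) error {
		if requestedMB < minUsableMemoryMB {
			return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested memory allocation %dMiB is less than the usable minimum of %dMB",
				requestedMB, minUsableMemoryMB)
		}
		return nil
	}

	func main() {
		// 250 is the value the failing start above was invoked with.
		if err := validateMemory(250); err != nil {
			fmt.Println("X Exiting due to", err)
		}
	}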
	
	
	==> CRI-O <==
	Dec 06 09:48:07 functional-959292 crio[6207]: time="2025-12-06 09:48:07.414114561Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c652f0d4-272a-4ef4-8c20-ff6519f865bd name=/runtime.v1.RuntimeService/Version
	Dec 06 09:48:07 functional-959292 crio[6207]: time="2025-12-06 09:48:07.416100946Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7be912d0-af1a-456b-9e78-997fe6a3b8a4 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 09:48:07 functional-959292 crio[6207]: time="2025-12-06 09:48:07.417094527Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765014487417066197,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:189832,},InodesUsed:&UInt64Value{Value:89,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7be912d0-af1a-456b-9e78-997fe6a3b8a4 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 09:48:07 functional-959292 crio[6207]: time="2025-12-06 09:48:07.417998792Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6b906003-1ee1-487e-8793-1926a5fd4baa name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 09:48:07 functional-959292 crio[6207]: time="2025-12-06 09:48:07.418186009Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6b906003-1ee1-487e-8793-1926a5fd4baa name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 09:48:07 functional-959292 crio[6207]: time="2025-12-06 09:48:07.418404261Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:61db9b9e577be536478d9c76214cbfa331b5ec5f2d9d6abeb4685a3d848cf0cf,PodSandboxId:36d360a27ea2624aab536a505fbef10b1768c4c4d32cc9d4e52d1b9b1667bff7,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765013330033288318,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-k7gd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82a110e1-7845-4a7d-b9a5-3ec24b78bc56,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d903826177716cadc434805610d3d7c97fe34f687c0d629f80b48d3e35dd4bd0,PodSandboxId:ad568adf035da611b0fa89a9d2bc1a52f712f06e5460d98faa7dc4666c324f60,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765013330015802835,Labe
ls:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c30f196-67d2-42c3-bce2-de37e892b354,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18a1529a7dabf16b5176f97d322952841f283a2ac2ed4808ee218c3493ae85ce,PodSandboxId:2b8a4542e834ef8074dbdce29b6816d3876fa521180369308b9af56d082c5ca1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:1765013327504506793,Labels:map[string]str
ing{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-959292,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb55ad11e86af137056bb1aed088676,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da33b15bd8e4ea8efd129d7f69f68848c4911892ad6df356ef1d337d19423cb0,PodSandboxId:69a742c0293e8e91987e692abf28ce2c07acbc015658deca95e5a0620eb8aa08,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187
b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765013327302321627,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-959292,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29eb7cb919df1fb37056cd89816b4994,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:614abeb9635691120e76fd2a26d5f727066b0f24733338105a96ea7a06fbb39e,PodSandboxId:8fb93c3e9f5efeb5842b92e866dea0e834cab6aecf5153ea0acc264cd3135fda,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1765013327189990856,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-959292,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e125a5aac66a014d03c2145ada7df16e,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cf20b6d798d084267897441ec085d74b908c5ce44f5a0104461078dade3e3db,PodSandboxId:b456d1a645c5655b6130602271669cb1ad2f4ed475379c89d824dcd73f5af0bd,Metadata:&ContainerMetadata{Name:kube-proxy,A
ttempt:2,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_EXITED,CreatedAt:1765013205091171744,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m9bdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 933aff26-6648-4c6e-98ba-105e57654258,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e0e61506a238944c5ca29e0e8cd96198ed5f63ae148b71a98b24338f8cec799,PodSandboxId:40e66880ad0325c1af4d727469ee7716b46e4bab593d4c6f8b6616d955b46191,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5
e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765013205089347538,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-k7gd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82a110e1-7845-4a7d-b9a5-3ec24b78bc56,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7a0409ceca2bb30bc27bd580d3e96626e7b2fcec3e9bc911aba8663b88b14ab,PodSandboxId:34bad5b9d443530d9e6b633925df02fe28053a540ef75662321a5776929dc0a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765013205067705975,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c30f196-67d2-42c3-bce2-de37e892b354,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuberne
tes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e5074c405f22f240aeee9223542f189d50079b24d71ecc6920bdadbd0ba3be6,PodSandboxId:0884bc5324fcae95ae4135cb5b22beee3d610f5cb1817151d06714a43ca64249,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765013201458369212,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-959292,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e125a5aac66a014d03c2145ada7df16e,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"proto
col\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db52b0948589f2ebba355737cd876e78592bcd1b1561e85c9b037a02e9276902,PodSandboxId:2ee0078ba926d770b02c83b258cc30f46d810cb27969bc2cfc88386b26d392b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765013201408585957,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-959292,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29ea4c3140d2062a2f1c0348e7497fdc,},Annotations:map[string]string{io.kubernetes.container.hash: bf36923
1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f007b54f29b7c249c166e8323973f208279a7e516813e500f58a370519efedc3,PodSandboxId:90d5149a4e408efeab16e48be73636d8e47e6e7aa5474e54cb84547d1352253f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765013201413537220,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-959292,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29eb7cb919df1fb37056cd8
9816b4994,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6b906003-1ee1-487e-8793-1926a5fd4baa name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 09:48:07 functional-959292 crio[6207]: time="2025-12-06 09:48:07.442008881Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=96923f34-3ce8-4413-bef7-da67236d19b2 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 06 09:48:07 functional-959292 crio[6207]: time="2025-12-06 09:48:07.442241910Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:2b8a4542e834ef8074dbdce29b6816d3876fa521180369308b9af56d082c5ca1,Metadata:&PodSandboxMetadata{Name:kube-apiserver-functional-959292,Uid:7eb55ad11e86af137056bb1aed088676,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765013327339289791,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-functional-959292,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb55ad11e86af137056bb1aed088676,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.122:8441,kubernetes.io/config.hash: 7eb55ad11e86af137056bb1aed088676,kubernetes.io/config.seen: 2025-12-06T09:28:46.686953275Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:36d360a27ea2624aab536a505fbef
10b1768c4c4d32cc9d4e52d1b9b1667bff7,Metadata:&PodSandboxMetadata{Name:coredns-7d764666f9-k7gd7,Uid:82a110e1-7845-4a7d-b9a5-3ec24b78bc56,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1765013325544996911,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7d764666f9-k7gd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82a110e1-7845-4a7d-b9a5-3ec24b78bc56,k8s-app: kube-dns,pod-template-hash: 7d764666f9,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-06T09:26:44.743620929Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8fb93c3e9f5efeb5842b92e866dea0e834cab6aecf5153ea0acc264cd3135fda,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-functional-959292,Uid:e125a5aac66a014d03c2145ada7df16e,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1765013325416481743,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-cont
roller-manager-functional-959292,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e125a5aac66a014d03c2145ada7df16e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e125a5aac66a014d03c2145ada7df16e,kubernetes.io/config.seen: 2025-12-06T09:26:40.746029429Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:69a742c0293e8e91987e692abf28ce2c07acbc015658deca95e5a0620eb8aa08,Metadata:&PodSandboxMetadata{Name:etcd-functional-959292,Uid:29eb7cb919df1fb37056cd89816b4994,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1765013325410350003,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-959292,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29eb7cb919df1fb37056cd89816b4994,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.122:2379,kubernetes.io/config.hash: 29eb7cb919df1fb37056cd89816b4994,kub
ernetes.io/config.seen: 2025-12-06T09:26:40.746027162Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ad568adf035da611b0fa89a9d2bc1a52f712f06e5460d98faa7dc4666c324f60,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:7c30f196-67d2-42c3-bce2-de37e892b354,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1765013325393798531,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c30f196-67d2-42c3-bce2-de37e892b354,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/s
torage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-12-06T09:26:44.743619685Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:40e66880ad0325c1af4d727469ee7716b46e4bab593d4c6f8b6616d955b46191,Metadata:&PodSandboxMetadata{Name:coredns-7d764666f9-k7gd7,Uid:82a110e1-7845-4a7d-b9a5-3ec24b78bc56,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1765013182094462634,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7d764666f9-k7gd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82a110e1-7845-4a7d-b9a5-3ec24b78bc56,k8s-app: kube-dns,pod-template-hash: 7d764666f9,},Annotations:ma
p[string]string{kubernetes.io/config.seen: 2025-12-06T09:25:28.429587812Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b456d1a645c5655b6130602271669cb1ad2f4ed475379c89d824dcd73f5af0bd,Metadata:&PodSandboxMetadata{Name:kube-proxy-m9bdx,Uid:933aff26-6648-4c6e-98ba-105e57654258,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1765013181757865358,Labels:map[string]string{controller-revision-hash: 7bd5454df7,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-m9bdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 933aff26-6648-4c6e-98ba-105e57654258,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-06T09:25:28.337718968Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0884bc5324fcae95ae4135cb5b22beee3d610f5cb1817151d06714a43ca64249,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-functional-959292,Uid:e125a5aac66a014d03c2145ada7df16e,Namespace:kube-sy
stem,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1765013181750052960,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-functional-959292,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e125a5aac66a014d03c2145ada7df16e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e125a5aac66a014d03c2145ada7df16e,kubernetes.io/config.seen: 2025-12-06T09:25:23.184140789Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:90d5149a4e408efeab16e48be73636d8e47e6e7aa5474e54cb84547d1352253f,Metadata:&PodSandboxMetadata{Name:etcd-functional-959292,Uid:29eb7cb919df1fb37056cd89816b4994,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1765013181665141253,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-959292,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29eb7cb919df1fb37056cd8981
6b4994,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.122:2379,kubernetes.io/config.hash: 29eb7cb919df1fb37056cd89816b4994,kubernetes.io/config.seen: 2025-12-06T09:25:23.184128924Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2ee0078ba926d770b02c83b258cc30f46d810cb27969bc2cfc88386b26d392b0,Metadata:&PodSandboxMetadata{Name:kube-scheduler-functional-959292,Uid:29ea4c3140d2062a2f1c0348e7497fdc,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1765013181628120992,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-functional-959292,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29ea4c3140d2062a2f1c0348e7497fdc,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 29ea4c3140d2062a2f1c0348e7497fdc,kubernetes.io/config.seen: 2025-12-06T09:25:23.184141562Z,kubernetes.io/config.source: file,}
,RuntimeHandler:,},&PodSandbox{Id:34bad5b9d443530d9e6b633925df02fe28053a540ef75662321a5776929dc0a4,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:7c30f196-67d2-42c3-bce2-de37e892b354,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1765013181617513160,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c30f196-67d2-42c3-bce2-de37e892b354,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imageP
ullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-12-06T09:25:30.224968635Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=96923f34-3ce8-4413-bef7-da67236d19b2 name=/runtime.v1.RuntimeService/ListPodSandbox
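The CRI-O debug entries above are plain CRI gRPC calls (Version, ImageFsInfo, ListPodSandbox, ListContainers) hitting the runtime socket. A minimal Go sketch of issuing two of those same RPCs with the k8s.io/cri-api client follows; the socket path and the unauthenticated dial are assumptions for the sketch, not how the test harness collects these logs.

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Dial the CRI-O socket (default path assumed).
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		rt := runtimeapi.NewRuntimeServiceClient(conn)

		// Same RPC as the /runtime.v1.RuntimeService/Version entries above.
		ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			panic(err)
		}
		fmt.Println("runtime:", ver.RuntimeName, ver.RuntimeVersion)

		// Same RPC as the ListContainers entries above; an empty filter
		// returns the full container list, as the log notes.
		list, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range list.Containers {
			fmt.Println(c.Metadata.Name, c.State)
		}
	}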
	Dec 06 09:48:07 functional-959292 crio[6207]: time="2025-12-06 09:48:07.443361620Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5af53e0e-178d-4ca0-9175-5e55b146cd78 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 09:48:07 functional-959292 crio[6207]: time="2025-12-06 09:48:07.443491347Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5af53e0e-178d-4ca0-9175-5e55b146cd78 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 09:48:07 functional-959292 crio[6207]: time="2025-12-06 09:48:07.443783331Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:61db9b9e577be536478d9c76214cbfa331b5ec5f2d9d6abeb4685a3d848cf0cf,PodSandboxId:36d360a27ea2624aab536a505fbef10b1768c4c4d32cc9d4e52d1b9b1667bff7,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765013330033288318,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-k7gd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82a110e1-7845-4a7d-b9a5-3ec24b78bc56,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d903826177716cadc434805610d3d7c97fe34f687c0d629f80b48d3e35dd4bd0,PodSandboxId:ad568adf035da611b0fa89a9d2bc1a52f712f06e5460d98faa7dc4666c324f60,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765013330015802835,Labe
ls:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c30f196-67d2-42c3-bce2-de37e892b354,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18a1529a7dabf16b5176f97d322952841f283a2ac2ed4808ee218c3493ae85ce,PodSandboxId:2b8a4542e834ef8074dbdce29b6816d3876fa521180369308b9af56d082c5ca1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:1765013327504506793,Labels:map[string]str
ing{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-959292,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb55ad11e86af137056bb1aed088676,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da33b15bd8e4ea8efd129d7f69f68848c4911892ad6df356ef1d337d19423cb0,PodSandboxId:69a742c0293e8e91987e692abf28ce2c07acbc015658deca95e5a0620eb8aa08,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187
b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765013327302321627,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-959292,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29eb7cb919df1fb37056cd89816b4994,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:614abeb9635691120e76fd2a26d5f727066b0f24733338105a96ea7a06fbb39e,PodSandboxId:8fb93c3e9f5efeb5842b92e866dea0e834cab6aecf5153ea0acc264cd3135fda,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1765013327189990856,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-959292,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e125a5aac66a014d03c2145ada7df16e,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cf20b6d798d084267897441ec085d74b908c5ce44f5a0104461078dade3e3db,PodSandboxId:b456d1a645c5655b6130602271669cb1ad2f4ed475379c89d824dcd73f5af0bd,Metadata:&ContainerMetadata{Name:kube-proxy,A
ttempt:2,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_EXITED,CreatedAt:1765013205091171744,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m9bdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 933aff26-6648-4c6e-98ba-105e57654258,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e0e61506a238944c5ca29e0e8cd96198ed5f63ae148b71a98b24338f8cec799,PodSandboxId:40e66880ad0325c1af4d727469ee7716b46e4bab593d4c6f8b6616d955b46191,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5
e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765013205089347538,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-k7gd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82a110e1-7845-4a7d-b9a5-3ec24b78bc56,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7a0409ceca2bb30bc27bd580d3e96626e7b2fcec3e9bc911aba8663b88b14ab,PodSandboxId:34bad5b9d443530d9e6b633925df02fe28053a540ef75662321a5776929dc0a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765013205067705975,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c30f196-67d2-42c3-bce2-de37e892b354,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuberne
tes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e5074c405f22f240aeee9223542f189d50079b24d71ecc6920bdadbd0ba3be6,PodSandboxId:0884bc5324fcae95ae4135cb5b22beee3d610f5cb1817151d06714a43ca64249,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765013201458369212,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-959292,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e125a5aac66a014d03c2145ada7df16e,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"proto
col\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db52b0948589f2ebba355737cd876e78592bcd1b1561e85c9b037a02e9276902,PodSandboxId:2ee0078ba926d770b02c83b258cc30f46d810cb27969bc2cfc88386b26d392b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765013201408585957,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-959292,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29ea4c3140d2062a2f1c0348e7497fdc,},Annotations:map[string]string{io.kubernetes.container.hash: bf36923
1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f007b54f29b7c249c166e8323973f208279a7e516813e500f58a370519efedc3,PodSandboxId:90d5149a4e408efeab16e48be73636d8e47e6e7aa5474e54cb84547d1352253f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765013201413537220,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-959292,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29eb7cb919df1fb37056cd8
9816b4994,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5af53e0e-178d-4ca0-9175-5e55b146cd78 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 09:48:07 functional-959292 crio[6207]: time="2025-12-06 09:48:07.450383086Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=27276657-3259-4d9a-a272-dc95edafa299 name=/runtime.v1.RuntimeService/Version
	Dec 06 09:48:07 functional-959292 crio[6207]: time="2025-12-06 09:48:07.450465166Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=27276657-3259-4d9a-a272-dc95edafa299 name=/runtime.v1.RuntimeService/Version
	Dec 06 09:48:07 functional-959292 crio[6207]: time="2025-12-06 09:48:07.452084584Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=368e4ccd-48a5-48ed-a7a9-42a6dfe1373c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 09:48:07 functional-959292 crio[6207]: time="2025-12-06 09:48:07.452737519Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765014487452716368,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:189832,},InodesUsed:&UInt64Value{Value:89,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=368e4ccd-48a5-48ed-a7a9-42a6dfe1373c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 09:48:07 functional-959292 crio[6207]: time="2025-12-06 09:48:07.453613598Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=afe477a6-9dc5-4b73-951d-e605f1dd3260 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 09:48:07 functional-959292 crio[6207]: time="2025-12-06 09:48:07.453770276Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=afe477a6-9dc5-4b73-951d-e605f1dd3260 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 09:48:07 functional-959292 crio[6207]: time="2025-12-06 09:48:07.454609901Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:61db9b9e577be536478d9c76214cbfa331b5ec5f2d9d6abeb4685a3d848cf0cf,PodSandboxId:36d360a27ea2624aab536a505fbef10b1768c4c4d32cc9d4e52d1b9b1667bff7,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765013330033288318,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-k7gd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82a110e1-7845-4a7d-b9a5-3ec24b78bc56,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d903826177716cadc434805610d3d7c97fe34f687c0d629f80b48d3e35dd4bd0,PodSandboxId:ad568adf035da611b0fa89a9d2bc1a52f712f06e5460d98faa7dc4666c324f60,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765013330015802835,Labe
ls:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c30f196-67d2-42c3-bce2-de37e892b354,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18a1529a7dabf16b5176f97d322952841f283a2ac2ed4808ee218c3493ae85ce,PodSandboxId:2b8a4542e834ef8074dbdce29b6816d3876fa521180369308b9af56d082c5ca1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:1765013327504506793,Labels:map[string]str
ing{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-959292,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb55ad11e86af137056bb1aed088676,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da33b15bd8e4ea8efd129d7f69f68848c4911892ad6df356ef1d337d19423cb0,PodSandboxId:69a742c0293e8e91987e692abf28ce2c07acbc015658deca95e5a0620eb8aa08,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187
b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765013327302321627,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-959292,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29eb7cb919df1fb37056cd89816b4994,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:614abeb9635691120e76fd2a26d5f727066b0f24733338105a96ea7a06fbb39e,PodSandboxId:8fb93c3e9f5efeb5842b92e866dea0e834cab6aecf5153ea0acc264cd3135fda,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1765013327189990856,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-959292,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e125a5aac66a014d03c2145ada7df16e,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cf20b6d798d084267897441ec085d74b908c5ce44f5a0104461078dade3e3db,PodSandboxId:b456d1a645c5655b6130602271669cb1ad2f4ed475379c89d824dcd73f5af0bd,Metadata:&ContainerMetadata{Name:kube-proxy,A
ttempt:2,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_EXITED,CreatedAt:1765013205091171744,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m9bdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 933aff26-6648-4c6e-98ba-105e57654258,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e0e61506a238944c5ca29e0e8cd96198ed5f63ae148b71a98b24338f8cec799,PodSandboxId:40e66880ad0325c1af4d727469ee7716b46e4bab593d4c6f8b6616d955b46191,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5
e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765013205089347538,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-k7gd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82a110e1-7845-4a7d-b9a5-3ec24b78bc56,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7a0409ceca2bb30bc27bd580d3e96626e7b2fcec3e9bc911aba8663b88b14ab,PodSandboxId:34bad5b9d443530d9e6b633925df02fe28053a540ef75662321a5776929dc0a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765013205067705975,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c30f196-67d2-42c3-bce2-de37e892b354,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuberne
tes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e5074c405f22f240aeee9223542f189d50079b24d71ecc6920bdadbd0ba3be6,PodSandboxId:0884bc5324fcae95ae4135cb5b22beee3d610f5cb1817151d06714a43ca64249,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765013201458369212,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-959292,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e125a5aac66a014d03c2145ada7df16e,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"proto
col\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db52b0948589f2ebba355737cd876e78592bcd1b1561e85c9b037a02e9276902,PodSandboxId:2ee0078ba926d770b02c83b258cc30f46d810cb27969bc2cfc88386b26d392b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765013201408585957,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-959292,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29ea4c3140d2062a2f1c0348e7497fdc,},Annotations:map[string]string{io.kubernetes.container.hash: bf36923
1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f007b54f29b7c249c166e8323973f208279a7e516813e500f58a370519efedc3,PodSandboxId:90d5149a4e408efeab16e48be73636d8e47e6e7aa5474e54cb84547d1352253f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765013201413537220,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-959292,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29eb7cb919df1fb37056cd8
9816b4994,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=afe477a6-9dc5-4b73-951d-e605f1dd3260 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 09:48:07 functional-959292 crio[6207]: time="2025-12-06 09:48:07.498336418Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dd63b251-aea7-4a8f-9944-8ae80e9c1a69 name=/runtime.v1.RuntimeService/Version
	Dec 06 09:48:07 functional-959292 crio[6207]: time="2025-12-06 09:48:07.498541390Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dd63b251-aea7-4a8f-9944-8ae80e9c1a69 name=/runtime.v1.RuntimeService/Version
	Dec 06 09:48:07 functional-959292 crio[6207]: time="2025-12-06 09:48:07.500078116Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b3ca1f71-c76c-4080-b867-349ee5b6d122 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 09:48:07 functional-959292 crio[6207]: time="2025-12-06 09:48:07.500789766Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765014487500762452,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:189832,},InodesUsed:&UInt64Value{Value:89,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b3ca1f71-c76c-4080-b867-349ee5b6d122 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 09:48:07 functional-959292 crio[6207]: time="2025-12-06 09:48:07.501694731Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e8c68814-1e25-424d-9518-309c3199efae name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 09:48:07 functional-959292 crio[6207]: time="2025-12-06 09:48:07.501792535Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e8c68814-1e25-424d-9518-309c3199efae name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 09:48:07 functional-959292 crio[6207]: time="2025-12-06 09:48:07.502031066Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:61db9b9e577be536478d9c76214cbfa331b5ec5f2d9d6abeb4685a3d848cf0cf,PodSandboxId:36d360a27ea2624aab536a505fbef10b1768c4c4d32cc9d4e52d1b9b1667bff7,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765013330033288318,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-k7gd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82a110e1-7845-4a7d-b9a5-3ec24b78bc56,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d903826177716cadc434805610d3d7c97fe34f687c0d629f80b48d3e35dd4bd0,PodSandboxId:ad568adf035da611b0fa89a9d2bc1a52f712f06e5460d98faa7dc4666c324f60,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765013330015802835,Labe
ls:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c30f196-67d2-42c3-bce2-de37e892b354,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18a1529a7dabf16b5176f97d322952841f283a2ac2ed4808ee218c3493ae85ce,PodSandboxId:2b8a4542e834ef8074dbdce29b6816d3876fa521180369308b9af56d082c5ca1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:1765013327504506793,Labels:map[string]str
ing{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-959292,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eb55ad11e86af137056bb1aed088676,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da33b15bd8e4ea8efd129d7f69f68848c4911892ad6df356ef1d337d19423cb0,PodSandboxId:69a742c0293e8e91987e692abf28ce2c07acbc015658deca95e5a0620eb8aa08,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187
b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765013327302321627,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-959292,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29eb7cb919df1fb37056cd89816b4994,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:614abeb9635691120e76fd2a26d5f727066b0f24733338105a96ea7a06fbb39e,PodSandboxId:8fb93c3e9f5efeb5842b92e866dea0e834cab6aecf5153ea0acc264cd3135fda,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1765013327189990856,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-959292,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e125a5aac66a014d03c2145ada7df16e,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cf20b6d798d084267897441ec085d74b908c5ce44f5a0104461078dade3e3db,PodSandboxId:b456d1a645c5655b6130602271669cb1ad2f4ed475379c89d824dcd73f5af0bd,Metadata:&ContainerMetadata{Name:kube-proxy,A
ttempt:2,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_EXITED,CreatedAt:1765013205091171744,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m9bdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 933aff26-6648-4c6e-98ba-105e57654258,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e0e61506a238944c5ca29e0e8cd96198ed5f63ae148b71a98b24338f8cec799,PodSandboxId:40e66880ad0325c1af4d727469ee7716b46e4bab593d4c6f8b6616d955b46191,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5
e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765013205089347538,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-k7gd7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82a110e1-7845-4a7d-b9a5-3ec24b78bc56,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7a0409ceca2bb30bc27bd580d3e96626e7b2fcec3e9bc911aba8663b88b14ab,PodSandboxId:34bad5b9d443530d9e6b633925df02fe28053a540ef75662321a5776929dc0a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765013205067705975,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c30f196-67d2-42c3-bce2-de37e892b354,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuberne
tes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e5074c405f22f240aeee9223542f189d50079b24d71ecc6920bdadbd0ba3be6,PodSandboxId:0884bc5324fcae95ae4135cb5b22beee3d610f5cb1817151d06714a43ca64249,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765013201458369212,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-959292,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e125a5aac66a014d03c2145ada7df16e,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"proto
col\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db52b0948589f2ebba355737cd876e78592bcd1b1561e85c9b037a02e9276902,PodSandboxId:2ee0078ba926d770b02c83b258cc30f46d810cb27969bc2cfc88386b26d392b0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765013201408585957,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-959292,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29ea4c3140d2062a2f1c0348e7497fdc,},Annotations:map[string]string{io.kubernetes.container.hash: bf36923
1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f007b54f29b7c249c166e8323973f208279a7e516813e500f58a370519efedc3,PodSandboxId:90d5149a4e408efeab16e48be73636d8e47e6e7aa5474e54cb84547d1352253f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765013201413537220,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-959292,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29eb7cb919df1fb37056cd8
9816b4994,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e8c68814-1e25-424d-9518-309c3199efae name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	61db9b9e577be       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139   19 minutes ago      Running             coredns                   3                   36d360a27ea26       coredns-7d764666f9-k7gd7                    kube-system
	d903826177716       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   19 minutes ago      Running             storage-provisioner       3                   ad568adf035da       storage-provisioner                         kube-system
	18a1529a7dabf       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   19 minutes ago      Running             kube-apiserver            0                   2b8a4542e834e       kube-apiserver-functional-959292            kube-system
	da33b15bd8e4e       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   19 minutes ago      Running             etcd                      3                   69a742c0293e8       etcd-functional-959292                      kube-system
	614abeb963569       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   19 minutes ago      Running             kube-controller-manager   3                   8fb93c3e9f5ef       kube-controller-manager-functional-959292   kube-system
	3cf20b6d798d0       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   21 minutes ago      Exited              kube-proxy                2                   b456d1a645c56       kube-proxy-m9bdx                            kube-system
	7e0e61506a238       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139   21 minutes ago      Exited              coredns                   2                   40e66880ad032       coredns-7d764666f9-k7gd7                    kube-system
	a7a0409ceca2b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   21 minutes ago      Exited              storage-provisioner       2                   34bad5b9d4435       storage-provisioner                         kube-system
	6e5074c405f22       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   21 minutes ago      Exited              kube-controller-manager   2                   0884bc5324fca       kube-controller-manager-functional-959292   kube-system
	f007b54f29b7c       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   21 minutes ago      Exited              etcd                      2                   90d5149a4e408       etcd-functional-959292                      kube-system
	db52b0948589f       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   21 minutes ago      Exited              kube-scheduler            2                   2ee0078ba926d       kube-scheduler-functional-959292            kube-system
	
	
	==> coredns [61db9b9e577be536478d9c76214cbfa331b5ec5f2d9d6abeb4685a3d848cf0cf] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:48699 - 9200 "HINFO IN 3011345749533723407.2755808351664552452. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.140925162s
	
	
	==> coredns [7e0e61506a238944c5ca29e0e8cd96198ed5f63ae148b71a98b24338f8cec799] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:58989 - 59986 "HINFO IN 982313396825240057.4214260657140433061. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.065929025s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-959292
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-959292
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4
	                    minikube.k8s.io/name=functional-959292
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_25_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:25:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-959292
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:48:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:43:26 +0000   Sat, 06 Dec 2025 09:25:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:43:26 +0000   Sat, 06 Dec 2025 09:25:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:43:26 +0000   Sat, 06 Dec 2025 09:25:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 09:43:26 +0000   Sat, 06 Dec 2025 09:25:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.122
	  Hostname:    functional-959292
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001796Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001796Ki
	  pods:               110
	System Info:
	  Machine ID:                 8850521481bc4986afad15a7829a76d3
	  System UUID:                88505214-81bc-4986-afad-15a7829a76d3
	  Boot ID:                    4ac16c92-c2e1-49f7-9815-3888dec3fe6a
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7d764666f9-k7gd7                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     22m
	  kube-system                 etcd-functional-959292                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         22m
	  kube-system                 kube-apiserver-functional-959292             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-functional-959292    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-m9bdx                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-functional-959292             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  22m   node-controller  Node functional-959292 event: Registered Node functional-959292 in Controller
	  Normal  RegisteredNode  21m   node-controller  Node functional-959292 event: Registered Node functional-959292 in Controller
	  Normal  RegisteredNode  21m   node-controller  Node functional-959292 event: Registered Node functional-959292 in Controller
	  Normal  RegisteredNode  19m   node-controller  Node functional-959292 event: Registered Node functional-959292 in Controller
	
	
	==> dmesg <==
	[Dec 6 09:24] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[Dec 6 09:25] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.002463] (rpcbind)[121]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.173535] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.087148] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.106091] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.138331] kauditd_printk_skb: 171 callbacks suppressed
	[  +5.632654] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.388010] kauditd_printk_skb: 251 callbacks suppressed
	[Dec 6 09:26] kauditd_printk_skb: 45 callbacks suppressed
	[  +3.314750] kauditd_printk_skb: 349 callbacks suppressed
	[  +0.123821] kauditd_printk_skb: 6 callbacks suppressed
	[  +0.444832] kauditd_printk_skb: 127 callbacks suppressed
	[Dec 6 09:27] kauditd_printk_skb: 18 callbacks suppressed
	[Dec 6 09:28] kauditd_printk_skb: 210 callbacks suppressed
	[  +3.656361] kauditd_printk_skb: 217 callbacks suppressed
	[Dec 6 09:29] kauditd_printk_skb: 22 callbacks suppressed
	[Dec 6 09:43] crun[9540]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +7.876653] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [da33b15bd8e4ea8efd129d7f69f68848c4911892ad6df356ef1d337d19423cb0] <==
	{"level":"warn","ts":"2025-12-06T09:28:48.549060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:28:48.557684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:28:48.566722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:28:48.576806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:28:48.591322Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:28:48.598588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:28:48.608529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:28:48.614591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:28:48.625335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:28:48.631991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:28:48.640223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:28:48.668185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:28:48.677517Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:28:48.683500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:28:48.696788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:28:48.702134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:28:48.708720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:28:48.717018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:28:48.763497Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54526","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-06T09:38:48.157115Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1081}
	{"level":"info","ts":"2025-12-06T09:38:48.180606Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1081,"took":"23.201792ms","hash":4151784083,"current-db-size-bytes":3031040,"current-db-size":"3.0 MB","current-db-size-in-use-bytes":1224704,"current-db-size-in-use":"1.2 MB"}
	{"level":"info","ts":"2025-12-06T09:38:48.180849Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":4151784083,"revision":1081,"compact-revision":-1}
	{"level":"info","ts":"2025-12-06T09:43:48.164495Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1326}
	{"level":"info","ts":"2025-12-06T09:43:48.168410Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1326,"took":"3.333409ms","hash":3936993725,"current-db-size-bytes":3031040,"current-db-size":"3.0 MB","current-db-size-in-use-bytes":1781760,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-12-06T09:43:48.168443Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3936993725,"revision":1326,"compact-revision":1081}
	
	
	==> etcd [f007b54f29b7c249c166e8323973f208279a7e516813e500f58a370519efedc3] <==
	{"level":"warn","ts":"2025-12-06T09:26:42.600427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:42.607539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:42.622998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:42.630229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:42.640454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:42.647703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:42.728060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33550","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-06T09:27:06.298206Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-06T09:27:06.298331Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-959292","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.122:2380"],"advertise-client-urls":["https://192.168.39.122:2379"]}
	{"level":"error","ts":"2025-12-06T09:27:06.298437Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-06T09:27:06.375024Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-06T09:27:06.376572Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-06T09:27:06.376625Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-06T09:27:06.376742Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-06T09:27:06.376757Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-06T09:27:06.376770Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.122:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-06T09:27:06.376783Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.122:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-06T09:27:06.376790Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.122:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-06T09:27:06.376777Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"227d76f9723f8d84","current-leader-member-id":"227d76f9723f8d84"}
	{"level":"info","ts":"2025-12-06T09:27:06.377010Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-06T09:27:06.377026Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-06T09:27:06.381360Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.122:2380"}
	{"level":"error","ts":"2025-12-06T09:27:06.381416Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.122:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-06T09:27:06.381453Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.122:2380"}
	{"level":"info","ts":"2025-12-06T09:27:06.381461Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-959292","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.122:2380"],"advertise-client-urls":["https://192.168.39.122:2379"]}
	
	
	==> kernel <==
	 09:48:07 up 23 min,  0 users,  load average: 0.06, 0.17, 0.17
	Linux functional-959292 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Dec  4 13:30:13 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [18a1529a7dabf16b5176f97d322952841f283a2ac2ed4808ee218c3493ae85ce] <==
	I1206 09:28:49.491607       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1206 09:28:49.491704       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1206 09:28:49.493449       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 09:28:49.494113       1 shared_informer.go:377] "Caches are synced"
	I1206 09:28:49.495689       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1206 09:28:49.502932       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1206 09:28:49.524557       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1206 09:28:49.525906       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1206 09:28:49.742266       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1206 09:28:50.296834       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1206 09:28:50.903781       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1206 09:28:50.954853       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1206 09:28:50.986310       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 09:28:50.995124       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 09:28:52.938101       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1206 09:28:53.087305       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 09:28:53.136996       1 controller.go:667] quota admission added evaluator for: endpoints
	I1206 09:32:58.330544       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.106.137.157"}
	I1206 09:33:02.285404       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.100.255.103"}
	I1206 09:33:02.475830       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.103.23.142"}
	I1206 09:33:03.503962       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.101.113.139"}
	I1206 09:38:49.430557       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:43:07.895895       1 controller.go:667] quota admission added evaluator for: namespaces
	I1206 09:43:08.236035       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.116.151"}
	I1206 09:43:08.253338       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.214.197"}
	
	
	==> kube-controller-manager [614abeb9635691120e76fd2a26d5f727066b0f24733338105a96ea7a06fbb39e] <==
	I1206 09:28:52.564454       1 shared_informer.go:377] "Caches are synced"
	I1206 09:28:52.570970       1 shared_informer.go:377] "Caches are synced"
	I1206 09:28:52.571146       1 shared_informer.go:377] "Caches are synced"
	I1206 09:28:52.571183       1 shared_informer.go:377] "Caches are synced"
	I1206 09:28:52.571209       1 shared_informer.go:377] "Caches are synced"
	I1206 09:28:52.571240       1 shared_informer.go:377] "Caches are synced"
	I1206 09:28:52.565382       1 shared_informer.go:377] "Caches are synced"
	I1206 09:28:52.578333       1 shared_informer.go:377] "Caches are synced"
	I1206 09:28:52.578444       1 shared_informer.go:377] "Caches are synced"
	I1206 09:28:52.578528       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1206 09:28:52.578706       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-959292"
	I1206 09:28:52.578860       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1206 09:28:52.579054       1 shared_informer.go:377] "Caches are synced"
	I1206 09:28:52.565392       1 shared_informer.go:377] "Caches are synced"
	I1206 09:28:52.646953       1 shared_informer.go:377] "Caches are synced"
	I1206 09:28:52.649322       1 shared_informer.go:377] "Caches are synced"
	I1206 09:28:52.649357       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1206 09:28:52.649362       1 garbagecollector.go:169] "Proceeding to collect garbage"
	E1206 09:43:08.033787       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:43:08.043804       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:43:08.050598       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:43:08.053254       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:43:08.102722       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:43:08.102777       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:43:08.120985       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [6e5074c405f22f240aeee9223542f189d50079b24d71ecc6920bdadbd0ba3be6] <==
	I1206 09:26:46.563542       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:26:46.563546       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:46.563626       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:46.563722       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:46.565722       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:46.565823       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:46.566130       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:46.566206       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:46.566319       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:46.566432       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:46.566697       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:46.566808       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1206 09:26:46.566910       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-959292"
	I1206 09:26:46.566996       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1206 09:26:46.567099       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:46.567112       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:46.567199       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:46.558896       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:46.558834       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:46.571479       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:46.607884       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:46.652325       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:46.657737       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:46.657767       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1206 09:26:46.657772       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-proxy [3cf20b6d798d084267897441ec085d74b908c5ce44f5a0104461078dade3e3db] <==
	I1206 09:26:45.552274       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:26:45.652953       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:45.653772       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.122"]
	E1206 09:26:45.653947       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:26:45.727048       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1206 09:26:45.727107       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1206 09:26:45.727129       1 server_linux.go:136] "Using iptables Proxier"
	I1206 09:26:45.748314       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:26:45.749023       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1206 09:26:45.749051       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:26:45.756557       1 config.go:200] "Starting service config controller"
	I1206 09:26:45.756596       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:26:45.756616       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:26:45.756619       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:26:45.757160       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:26:45.757191       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:26:45.759625       1 config.go:309] "Starting node config controller"
	I1206 09:26:45.759762       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:26:45.759806       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:26:45.856757       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 09:26:45.856866       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 09:26:45.857298       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [db52b0948589f2ebba355737cd876e78592bcd1b1561e85c9b037a02e9276902] <==
	I1206 09:26:42.187606       1 serving.go:386] Generated self-signed cert in-memory
	W1206 09:26:43.324949       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1206 09:26:43.325012       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1206 09:26:43.325023       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1206 09:26:43.325029       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1206 09:26:43.411952       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1206 09:26:43.412393       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:26:43.415451       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:26:43.415486       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:26:43.415668       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1206 09:26:43.415743       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1206 09:26:43.516590       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:06.283101       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1206 09:27:06.283709       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1206 09:27:06.283824       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1206 09:27:06.283855       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 06 09:47:44 functional-959292 kubelet[6818]: E1206 09:47:44.745905    6818 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-959292_kube-system_29ea4c3140d2062a2f1c0348e7497fdc_2\" already exists"
	Dec 06 09:47:44 functional-959292 kubelet[6818]: E1206 09:47:44.745955    6818 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-959292_kube-system_29ea4c3140d2062a2f1c0348e7497fdc_2\" already exists" pod="kube-system/kube-scheduler-functional-959292"
	Dec 06 09:47:44 functional-959292 kubelet[6818]: E1206 09:47:44.745969    6818 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-959292_kube-system_29ea4c3140d2062a2f1c0348e7497fdc_2\" already exists" pod="kube-system/kube-scheduler-functional-959292"
	Dec 06 09:47:44 functional-959292 kubelet[6818]: E1206 09:47:44.746068    6818 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-scheduler-functional-959292_kube-system(29ea4c3140d2062a2f1c0348e7497fdc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-scheduler-functional-959292_kube-system(29ea4c3140d2062a2f1c0348e7497fdc)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-scheduler-functional-959292_kube-system_29ea4c3140d2062a2f1c0348e7497fdc_2\\\" already exists\"" pod="kube-system/kube-scheduler-functional-959292" podUID="29ea4c3140d2062a2f1c0348e7497fdc"
	Dec 06 09:47:46 functional-959292 kubelet[6818]: E1206 09:47:46.813015    6818 manager.go:1119] Failed to create existing container: /kubepods/burstable/pod29ea4c3140d2062a2f1c0348e7497fdc/crio-2ee0078ba926d770b02c83b258cc30f46d810cb27969bc2cfc88386b26d392b0: Error finding container 2ee0078ba926d770b02c83b258cc30f46d810cb27969bc2cfc88386b26d392b0: Status 404 returned error can't find the container with id 2ee0078ba926d770b02c83b258cc30f46d810cb27969bc2cfc88386b26d392b0
	Dec 06 09:47:46 functional-959292 kubelet[6818]: E1206 09:47:46.813407    6818 manager.go:1119] Failed to create existing container: /kubepods/besteffort/pod7c30f196-67d2-42c3-bce2-de37e892b354/crio-34bad5b9d443530d9e6b633925df02fe28053a540ef75662321a5776929dc0a4: Error finding container 34bad5b9d443530d9e6b633925df02fe28053a540ef75662321a5776929dc0a4: Status 404 returned error can't find the container with id 34bad5b9d443530d9e6b633925df02fe28053a540ef75662321a5776929dc0a4
	Dec 06 09:47:46 functional-959292 kubelet[6818]: E1206 09:47:46.814188    6818 manager.go:1119] Failed to create existing container: /kubepods/burstable/pod29eb7cb919df1fb37056cd89816b4994/crio-90d5149a4e408efeab16e48be73636d8e47e6e7aa5474e54cb84547d1352253f: Error finding container 90d5149a4e408efeab16e48be73636d8e47e6e7aa5474e54cb84547d1352253f: Status 404 returned error can't find the container with id 90d5149a4e408efeab16e48be73636d8e47e6e7aa5474e54cb84547d1352253f
	Dec 06 09:47:46 functional-959292 kubelet[6818]: E1206 09:47:46.814447    6818 manager.go:1119] Failed to create existing container: /kubepods/burstable/pode125a5aac66a014d03c2145ada7df16e/crio-0884bc5324fcae95ae4135cb5b22beee3d610f5cb1817151d06714a43ca64249: Error finding container 0884bc5324fcae95ae4135cb5b22beee3d610f5cb1817151d06714a43ca64249: Status 404 returned error can't find the container with id 0884bc5324fcae95ae4135cb5b22beee3d610f5cb1817151d06714a43ca64249
	Dec 06 09:47:46 functional-959292 kubelet[6818]: E1206 09:47:46.814775    6818 manager.go:1119] Failed to create existing container: /kubepods/besteffort/pod933aff26-6648-4c6e-98ba-105e57654258/crio-b456d1a645c5655b6130602271669cb1ad2f4ed475379c89d824dcd73f5af0bd: Error finding container b456d1a645c5655b6130602271669cb1ad2f4ed475379c89d824dcd73f5af0bd: Status 404 returned error can't find the container with id b456d1a645c5655b6130602271669cb1ad2f4ed475379c89d824dcd73f5af0bd
	Dec 06 09:47:46 functional-959292 kubelet[6818]: E1206 09:47:46.815100    6818 manager.go:1119] Failed to create existing container: /kubepods/burstable/pod82a110e1-7845-4a7d-b9a5-3ec24b78bc56/crio-40e66880ad0325c1af4d727469ee7716b46e4bab593d4c6f8b6616d955b46191: Error finding container 40e66880ad0325c1af4d727469ee7716b46e4bab593d4c6f8b6616d955b46191: Status 404 returned error can't find the container with id 40e66880ad0325c1af4d727469ee7716b46e4bab593d4c6f8b6616d955b46191
	Dec 06 09:47:47 functional-959292 kubelet[6818]: E1206 09:47:47.114782    6818 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765014467114331831  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:189832}  inodes_used:{value:89}}"
	Dec 06 09:47:47 functional-959292 kubelet[6818]: E1206 09:47:47.115038    6818 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765014467114331831  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:189832}  inodes_used:{value:89}}"
	Dec 06 09:47:56 functional-959292 kubelet[6818]: E1206 09:47:56.744958    6818 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-proxy-m9bdx_kube-system_933aff26-6648-4c6e-98ba-105e57654258_2\" already exists"
	Dec 06 09:47:56 functional-959292 kubelet[6818]: E1206 09:47:56.745005    6818 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-proxy-m9bdx_kube-system_933aff26-6648-4c6e-98ba-105e57654258_2\" already exists" pod="kube-system/kube-proxy-m9bdx"
	Dec 06 09:47:56 functional-959292 kubelet[6818]: E1206 09:47:56.745020    6818 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-proxy-m9bdx_kube-system_933aff26-6648-4c6e-98ba-105e57654258_2\" already exists" pod="kube-system/kube-proxy-m9bdx"
	Dec 06 09:47:56 functional-959292 kubelet[6818]: E1206 09:47:56.745064    6818 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-proxy-m9bdx_kube-system(933aff26-6648-4c6e-98ba-105e57654258)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-proxy-m9bdx_kube-system(933aff26-6648-4c6e-98ba-105e57654258)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-proxy-m9bdx_kube-system_933aff26-6648-4c6e-98ba-105e57654258_2\\\" already exists\"" pod="kube-system/kube-proxy-m9bdx" podUID="933aff26-6648-4c6e-98ba-105e57654258"
	Dec 06 09:47:57 functional-959292 kubelet[6818]: E1206 09:47:57.117358    6818 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765014477116489590  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:189832}  inodes_used:{value:89}}"
	Dec 06 09:47:57 functional-959292 kubelet[6818]: E1206 09:47:57.117415    6818 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765014477116489590  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:189832}  inodes_used:{value:89}}"
	Dec 06 09:47:58 functional-959292 kubelet[6818]: E1206 09:47:58.732026    6818 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-959292" containerName="kube-scheduler"
	Dec 06 09:47:58 functional-959292 kubelet[6818]: E1206 09:47:58.742033    6818 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-959292_kube-system_29ea4c3140d2062a2f1c0348e7497fdc_2\" already exists"
	Dec 06 09:47:58 functional-959292 kubelet[6818]: E1206 09:47:58.742076    6818 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-959292_kube-system_29ea4c3140d2062a2f1c0348e7497fdc_2\" already exists" pod="kube-system/kube-scheduler-functional-959292"
	Dec 06 09:47:58 functional-959292 kubelet[6818]: E1206 09:47:58.742137    6818 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-959292_kube-system_29ea4c3140d2062a2f1c0348e7497fdc_2\" already exists" pod="kube-system/kube-scheduler-functional-959292"
	Dec 06 09:47:58 functional-959292 kubelet[6818]: E1206 09:47:58.742216    6818 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-scheduler-functional-959292_kube-system(29ea4c3140d2062a2f1c0348e7497fdc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-scheduler-functional-959292_kube-system(29ea4c3140d2062a2f1c0348e7497fdc)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-scheduler-functional-959292_kube-system_29ea4c3140d2062a2f1c0348e7497fdc_2\\\" already exists\"" pod="kube-system/kube-scheduler-functional-959292" podUID="29ea4c3140d2062a2f1c0348e7497fdc"
	Dec 06 09:48:07 functional-959292 kubelet[6818]: E1206 09:48:07.119811    6818 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765014487119436798  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:189832}  inodes_used:{value:89}}"
	Dec 06 09:48:07 functional-959292 kubelet[6818]: E1206 09:48:07.119846    6818 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765014487119436798  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:189832}  inodes_used:{value:89}}"
	
	
	==> storage-provisioner [a7a0409ceca2bb30bc27bd580d3e96626e7b2fcec3e9bc911aba8663b88b14ab] <==
	I1206 09:26:45.256260       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1206 09:26:45.290425       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1206 09:26:45.290738       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1206 09:26:45.299269       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:48.758333       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:53.018709       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:56.617508       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:59.671830       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:27:02.694739       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:27:02.705702       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1206 09:27:02.706184       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1206 09:27:02.706351       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-959292_341c615a-7444-42c8-962e-1cbe407772a1!
	I1206 09:27:02.707335       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f6d6b81a-742f-4c63-9791-61712a4d492f", APIVersion:"v1", ResourceVersion:"602", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-959292_341c615a-7444-42c8-962e-1cbe407772a1 became leader
	W1206 09:27:02.714799       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:27:02.723598       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1206 09:27:02.808255       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-959292_341c615a-7444-42c8-962e-1cbe407772a1!
	W1206 09:27:04.727502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:27:04.739169       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [d903826177716cadc434805610d3d7c97fe34f687c0d629f80b48d3e35dd4bd0] <==
	W1206 09:47:43.544597       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:47:45.547319       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:47:45.553770       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:47:47.557896       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:47:47.563932       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:47:49.569080       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:47:49.579384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:47:51.583241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:47:51.589098       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:47:53.592701       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:47:53.598233       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:47:55.601919       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:47:55.606728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:47:57.610815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:47:57.621627       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:47:59.625051       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:47:59.632488       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:48:01.636727       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:48:01.646442       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:48:03.650401       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:48:03.656629       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:48:05.660821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:48:05.666447       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:48:07.671216       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:48:07.683685       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
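The kubelet errors in the log above repeatedly report "pod sandbox with name ... already exists" for kube-scheduler and kube-proxy, which points at stale CRI-O sandboxes that kubelet keeps trying to recreate. A minimal way to confirm and clear this by hand, assuming crictl is available inside the VM (reachable with `out/minikube-linux-amd64 -p functional-959292 ssh`); the sandbox ID placeholder below is illustrative, not taken from this run:

	# inside the VM: list CRI-O pod sandboxes for the affected static pod (--name is a substring filter)
	sudo crictl pods --name kube-scheduler-functional-959292
	# stop and remove a stale sandbox by ID so kubelet can create a fresh one on the next sync
	sudo crictl stopp <POD_SANDBOX_ID>
	sudo crictl rmp <POD_SANDBOX_ID>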
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-959292 -n functional-959292
helpers_test.go:269: (dbg) Run:  kubectl --context functional-959292 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-5758569b79-xzgh7 hello-node-connect-9f67c86d4-bbj44 mysql-844cf969f6-f88x4 sp-pod dashboard-metrics-scraper-5565989548-sql5g kubernetes-dashboard-b84665fb8-b4xgf
helpers_test.go:282: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-959292 describe pod busybox-mount hello-node-5758569b79-xzgh7 hello-node-connect-9f67c86d4-bbj44 mysql-844cf969f6-f88x4 sp-pod dashboard-metrics-scraper-5565989548-sql5g kubernetes-dashboard-b84665fb8-b4xgf
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-959292 describe pod busybox-mount hello-node-5758569b79-xzgh7 hello-node-connect-9f67c86d4-bbj44 mysql-844cf969f6-f88x4 sp-pod dashboard-metrics-scraper-5565989548-sql5g kubernetes-dashboard-b84665fb8-b4xgf: exit status 1 (103.382178ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  mount-munger:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    Environment:  <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b968x (ro)
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-b968x:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             hello-node-5758569b79-xzgh7
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=hello-node
	                  pod-template-hash=5758569b79
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-node-5758569b79
	Containers:
	  echo-server:
	    Image:        kicbase/echo-server
	    Port:         <none>
	    Host Port:    <none>
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hxsl7 (ro)
	Volumes:
	  kube-api-access-hxsl7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             hello-node-connect-9f67c86d4-bbj44
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=hello-node-connect
	                  pod-template-hash=9f67c86d4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-node-connect-9f67c86d4
	Containers:
	  echo-server:
	    Image:        kicbase/echo-server
	    Port:         <none>
	    Host Port:    <none>
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k5nkl (ro)
	Volumes:
	  kube-api-access-k5nkl:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             mysql-844cf969f6-f88x4
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=mysql
	                  pod-template-hash=844cf969f6
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/mysql-844cf969f6
	Containers:
	  mysql:
	    Image:      docker.io/mysql:5.7
	    Port:       3306/TCP (mysql)
	    Host Port:  0/TCP (mysql)
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-49bd5 (ro)
	Volumes:
	  kube-api-access-49bd5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  myfrontend:
	    Image:        docker.io/nginx
	    Port:         <none>
	    Host Port:    <none>
	    Environment:  <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6dtzk (ro)
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-6dtzk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5565989548-sql5g" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-b4xgf" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-959292 describe pod busybox-mount hello-node-5758569b79-xzgh7 hello-node-connect-9f67c86d4-bbj44 mysql-844cf969f6-f88x4 sp-pod dashboard-metrics-scraper-5565989548-sql5g kubernetes-dashboard-b84665fb8-b4xgf: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (302.00s)
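The stderr above shows the dashboard-metrics-scraper and kubernetes-dashboard pods as NotFound; the describe was run without a namespace flag, so it looked in default, while the dashboard addon normally deploys into the kubernetes-dashboard namespace. A hedged set of checks to confirm whether the addon's workloads ever came up, reusing the profile and context names from this run:

	# verify the dashboard addon state and its workloads in the expected namespace
	out/minikube-linux-amd64 -p functional-959292 addons list
	kubectl --context functional-959292 get deploy,pods -n kubernetes-dashboard
	# if the deployments are missing, re-applying the addon is one way to recreate them
	out/minikube-linux-amd64 -p functional-959292 addons enable dashboard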

                                                
                                    

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (603.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-959292 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-959292 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-9f67c86d4-bbj44" [d409aa1c-6a32-4cdc-9260-d5ad2568f95d] Pending
functional_test.go:1645: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-959292 -n functional-959292
functional_test.go:1645: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-12-06 09:43:03.787824849 +0000 UTC m=+1887.197066747
functional_test.go:1645: (dbg) Run:  kubectl --context functional-959292 describe po hello-node-connect-9f67c86d4-bbj44 -n default
functional_test.go:1645: (dbg) kubectl --context functional-959292 describe po hello-node-connect-9f67c86d4-bbj44 -n default:
Name:             hello-node-connect-9f67c86d4-bbj44
Namespace:        default
Priority:         0
Service Account:  default
Node:             <none>
Labels:           app=hello-node-connect
pod-template-hash=9f67c86d4
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    ReplicaSet/hello-node-connect-9f67c86d4
Containers:
echo-server:
Image:        kicbase/echo-server
Port:         <none>
Host Port:    <none>
Environment:  <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k5nkl (ro)
Volumes:
kube-api-access-k5nkl:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>
functional_test.go:1645: (dbg) Run:  kubectl --context functional-959292 logs hello-node-connect-9f67c86d4-bbj44 -n default
functional_test.go:1645: (dbg) kubectl --context functional-959292 logs hello-node-connect-9f67c86d4-bbj44 -n default:
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-959292 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-9f67c86d4-bbj44
Namespace:        default
Priority:         0
Service Account:  default
Node:             <none>
Labels:           app=hello-node-connect
pod-template-hash=9f67c86d4
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    ReplicaSet/hello-node-connect-9f67c86d4
Containers:
echo-server:
Image:        kicbase/echo-server
Port:         <none>
Host Port:    <none>
Environment:  <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k5nkl (ro)
Volumes:
kube-api-access-k5nkl:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-959292 logs -l app=hello-node-connect
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-959292 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.101.113.139
IPs:                      10.101.113.139
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32469/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
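The Service describe above shows an empty Endpoints list, which is consistent with the hello-node-connect pod never leaving Pending (no Node assigned in its describe output). Two quick kubectl checks that narrow this down, using only the context and object names already shown in this log; the field-selector value is simply the pod name from the describe output:

	# an empty ENDPOINTS column confirms no ready pod matches the app=hello-node-connect selector
	kubectl --context functional-959292 get endpoints hello-node-connect
	# scheduler/kubelet events for the Pending pod usually explain why it was never scheduled
	kubectl --context functional-959292 get events -n default \
	  --field-selector involvedObject.name=hello-node-connect-9f67c86d4-bbj44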
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-959292 -n functional-959292
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-959292 logs -n 25: (1.615317378s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                  ARGS                                                                  │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p functional-959292 --alsologtostderr -v=8                                                                                            │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:26 UTC │ 06 Dec 25 09:26 UTC │
	│ cache   │ functional-959292 cache add registry.k8s.io/pause:3.1                                                                                  │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:26 UTC │ 06 Dec 25 09:26 UTC │
	│ cache   │ functional-959292 cache add registry.k8s.io/pause:3.3                                                                                  │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:26 UTC │ 06 Dec 25 09:26 UTC │
	│ cache   │ functional-959292 cache add registry.k8s.io/pause:latest                                                                               │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:26 UTC │ 06 Dec 25 09:27 UTC │
	│ cache   │ functional-959292 cache add minikube-local-cache-test:functional-959292                                                                │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ cache   │ functional-959292 cache delete minikube-local-cache-test:functional-959292                                                             │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                       │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ cache   │ list                                                                                                                                   │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ ssh     │ functional-959292 ssh -n functional-959292 sudo cat /tmp/does/not/exist/cp-test.txt                                                    │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ ssh     │ functional-959292 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                               │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ ssh     │ functional-959292 ssh echo hello                                                                                                       │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ ssh     │ functional-959292 ssh cat /etc/hostname                                                                                                │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ addons  │ functional-959292 addons list                                                                                                          │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ addons  │ functional-959292 addons list -o json                                                                                                  │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ license │                                                                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 09:39 UTC │ 06 Dec 25 09:39 UTC │
	│ mount   │ -p functional-959292 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1237853046/001:/mount-9p --alsologtostderr -v=1 │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:39 UTC │                     │
	│ ssh     │ functional-959292 ssh findmnt -T /mount-9p | grep 9p                                                                                   │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:39 UTC │                     │
	│ ssh     │ functional-959292 ssh findmnt -T /mount-9p | grep 9p                                                                                   │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:39 UTC │ 06 Dec 25 09:39 UTC │
	│ ssh     │ functional-959292 ssh -- ls -la /mount-9p                                                                                              │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:39 UTC │ 06 Dec 25 09:39 UTC │
	│ ssh     │ functional-959292 ssh cat /mount-9p/test-1765013952684701104                                                                           │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:39 UTC │ 06 Dec 25 09:39 UTC │
	│ service │ functional-959292 service list                                                                                                         │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:43 UTC │ 06 Dec 25 09:43 UTC │
	│ service │ functional-959292 service list -o json                                                                                                 │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:43 UTC │ 06 Dec 25 09:43 UTC │
	│ service │ functional-959292 service --namespace=default --https --url hello-node                                                                 │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:43 UTC │                     │
	│ service │ functional-959292 service hello-node --url --format={{.IP}}                                                                            │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:43 UTC │                     │
	│ service │ functional-959292 service hello-node --url                                                                                             │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:43 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:27:05
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:27:05.354163  405000 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:27:05.354265  405000 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:27:05.354269  405000 out.go:374] Setting ErrFile to fd 2...
	I1206 09:27:05.354271  405000 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:27:05.354498  405000 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-392561/.minikube/bin
	I1206 09:27:05.355029  405000 out.go:368] Setting JSON to false
	I1206 09:27:05.356004  405000 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4165,"bootTime":1765009060,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:27:05.356057  405000 start.go:143] virtualization: kvm guest
	I1206 09:27:05.358398  405000 out.go:179] * [functional-959292] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:27:05.359753  405000 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:27:05.359781  405000 notify.go:221] Checking for updates...
	I1206 09:27:05.362220  405000 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:27:05.363383  405000 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-392561/kubeconfig
	I1206 09:27:05.367950  405000 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-392561/.minikube
	I1206 09:27:05.369254  405000 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:27:05.370459  405000 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:27:05.372160  405000 config.go:182] Loaded profile config "functional-959292": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:27:05.372262  405000 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:27:05.406948  405000 out.go:179] * Using the kvm2 driver based on existing profile
	I1206 09:27:05.408573  405000 start.go:309] selected driver: kvm2
	I1206 09:27:05.408583  405000 start.go:927] validating driver "kvm2" against &{Name:functional-959292 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-959292 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.122 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: Mount
MSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:27:05.408701  405000 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:27:05.409629  405000 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:27:05.409658  405000 cni.go:84] Creating CNI manager for ""
	I1206 09:27:05.409742  405000 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 09:27:05.409790  405000 start.go:353] cluster config:
	{Name:functional-959292 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-959292 Namespace:default AP
IServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.122 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:27:05.409882  405000 iso.go:125] acquiring lock: {Name:mkf36bf2c9901302dc74c7ac02d02007e6a978f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:27:05.411683  405000 out.go:179] * Starting "functional-959292" primary control-plane node in "functional-959292" cluster
	I1206 09:27:05.413086  405000 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 09:27:05.413115  405000 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-392561/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1206 09:27:05.413122  405000 cache.go:65] Caching tarball of preloaded images
	I1206 09:27:05.413214  405000 preload.go:238] Found /home/jenkins/minikube-integration/22047-392561/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 09:27:05.413220  405000 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1206 09:27:05.413331  405000 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/config.json ...
	I1206 09:27:05.413537  405000 start.go:360] acquireMachinesLock for functional-959292: {Name:mk0e8456872a81874c47f1b4b5997728e70c766d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1206 09:27:05.413614  405000 start.go:364] duration metric: took 62.698µs to acquireMachinesLock for "functional-959292"
	I1206 09:27:05.413630  405000 start.go:96] Skipping create...Using existing machine configuration
	I1206 09:27:05.413634  405000 fix.go:54] fixHost starting: 
	I1206 09:27:05.415678  405000 fix.go:112] recreateIfNeeded on functional-959292: state=Running err=<nil>
	W1206 09:27:05.415691  405000 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 09:27:05.417511  405000 out.go:252] * Updating the running kvm2 "functional-959292" VM ...
	I1206 09:27:05.417535  405000 machine.go:94] provisionDockerMachine start ...
	I1206 09:27:05.420691  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.421169  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:05.421191  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.421417  405000 main.go:143] libmachine: Using SSH client type: native
	I1206 09:27:05.421668  405000 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I1206 09:27:05.421672  405000 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 09:27:05.530432  405000 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-959292
	
	I1206 09:27:05.530457  405000 buildroot.go:166] provisioning hostname "functional-959292"
	I1206 09:27:05.533437  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.533923  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:05.533944  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.534145  405000 main.go:143] libmachine: Using SSH client type: native
	I1206 09:27:05.534373  405000 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I1206 09:27:05.534380  405000 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-959292 && echo "functional-959292" | sudo tee /etc/hostname
	I1206 09:27:05.673011  405000 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-959292
	
	I1206 09:27:05.676321  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.676815  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:05.676842  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.677084  405000 main.go:143] libmachine: Using SSH client type: native
	I1206 09:27:05.677310  405000 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I1206 09:27:05.677325  405000 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-959292' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-959292/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-959292' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 09:27:05.790461  405000 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 09:27:05.790486  405000 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22047-392561/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-392561/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-392561/.minikube}
	I1206 09:27:05.790531  405000 buildroot.go:174] setting up certificates
	I1206 09:27:05.790542  405000 provision.go:84] configureAuth start
	I1206 09:27:05.793758  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.794112  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:05.794125  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.796610  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.797015  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:05.797033  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.797173  405000 provision.go:143] copyHostCerts
	I1206 09:27:05.797219  405000 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-392561/.minikube/ca.pem, removing ...
	I1206 09:27:05.797225  405000 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-392561/.minikube/ca.pem
	I1206 09:27:05.797294  405000 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-392561/.minikube/ca.pem (1082 bytes)
	I1206 09:27:05.797448  405000 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-392561/.minikube/cert.pem, removing ...
	I1206 09:27:05.797454  405000 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-392561/.minikube/cert.pem
	I1206 09:27:05.797481  405000 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-392561/.minikube/cert.pem (1123 bytes)
	I1206 09:27:05.797559  405000 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-392561/.minikube/key.pem, removing ...
	I1206 09:27:05.797562  405000 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-392561/.minikube/key.pem
	I1206 09:27:05.797584  405000 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-392561/.minikube/key.pem (1679 bytes)
	I1206 09:27:05.797630  405000 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-392561/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca-key.pem org=jenkins.functional-959292 san=[127.0.0.1 192.168.39.122 functional-959292 localhost minikube]
	I1206 09:27:05.927749  405000 provision.go:177] copyRemoteCerts
	I1206 09:27:05.927805  405000 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 09:27:05.930467  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.930995  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:05.931017  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.931182  405000 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/functional-959292/id_rsa Username:docker}
	I1206 09:27:06.020293  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 09:27:06.062999  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 09:27:06.103800  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 09:27:06.135603  405000 provision.go:87] duration metric: took 345.046364ms to configureAuth
	I1206 09:27:06.135630  405000 buildroot.go:189] setting minikube options for container-runtime
	I1206 09:27:06.135924  405000 config.go:182] Loaded profile config "functional-959292": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:27:06.138757  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:06.139157  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:06.139176  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:06.139330  405000 main.go:143] libmachine: Using SSH client type: native
	I1206 09:27:06.139546  405000 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I1206 09:27:06.139563  405000 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 09:27:11.746822  405000 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 09:27:11.746840  405000 machine.go:97] duration metric: took 6.329297702s to provisionDockerMachine
	I1206 09:27:11.746854  405000 start.go:293] postStartSetup for "functional-959292" (driver="kvm2")
	I1206 09:27:11.746876  405000 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 09:27:11.746961  405000 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 09:27:11.750570  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:11.751014  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:11.751033  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:11.751196  405000 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/functional-959292/id_rsa Username:docker}
	I1206 09:27:11.837868  405000 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 09:27:11.843271  405000 info.go:137] Remote host: Buildroot 2025.02
	I1206 09:27:11.843298  405000 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-392561/.minikube/addons for local assets ...
	I1206 09:27:11.843387  405000 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-392561/.minikube/files for local assets ...
	I1206 09:27:11.843463  405000 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-392561/.minikube/files/etc/ssl/certs/3965342.pem -> 3965342.pem in /etc/ssl/certs
	I1206 09:27:11.843553  405000 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-392561/.minikube/files/etc/test/nested/copy/396534/hosts -> hosts in /etc/test/nested/copy/396534
	I1206 09:27:11.843597  405000 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/396534
	I1206 09:27:11.856490  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/files/etc/ssl/certs/3965342.pem --> /etc/ssl/certs/3965342.pem (1708 bytes)
	I1206 09:27:11.887680  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/files/etc/test/nested/copy/396534/hosts --> /etc/test/nested/copy/396534/hosts (40 bytes)
	I1206 09:27:11.917471  405000 start.go:296] duration metric: took 170.599577ms for postStartSetup
	I1206 09:27:11.917525  405000 fix.go:56] duration metric: took 6.503890577s for fixHost
	I1206 09:27:11.920391  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:11.920829  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:11.920843  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:11.921039  405000 main.go:143] libmachine: Using SSH client type: native
	I1206 09:27:11.921236  405000 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I1206 09:27:11.921240  405000 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1206 09:27:12.029674  405000 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765013232.025035703
	
	I1206 09:27:12.029690  405000 fix.go:216] guest clock: 1765013232.025035703
	I1206 09:27:12.029728  405000 fix.go:229] Guest: 2025-12-06 09:27:12.025035703 +0000 UTC Remote: 2025-12-06 09:27:11.917528099 +0000 UTC m=+6.615934527 (delta=107.507604ms)
	I1206 09:27:12.029754  405000 fix.go:200] guest clock delta is within tolerance: 107.507604ms
	I1206 09:27:12.029760  405000 start.go:83] releasing machines lock for "functional-959292", held for 6.616137159s
	I1206 09:27:12.032871  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:12.033367  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:12.033386  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:12.033972  405000 ssh_runner.go:195] Run: cat /version.json
	I1206 09:27:12.034041  405000 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 09:27:12.037021  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:12.037356  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:12.037372  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:12.037454  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:12.037528  405000 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/functional-959292/id_rsa Username:docker}
	I1206 09:27:12.037968  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:12.037994  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:12.038195  405000 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/functional-959292/id_rsa Username:docker}
	I1206 09:27:12.135127  405000 ssh_runner.go:195] Run: systemctl --version
	I1206 09:27:12.178512  405000 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 09:27:12.381531  405000 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 09:27:12.396979  405000 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 09:27:12.397040  405000 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 09:27:12.420072  405000 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 09:27:12.420094  405000 start.go:496] detecting cgroup driver to use...
	I1206 09:27:12.420194  405000 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 09:27:12.466799  405000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 09:27:12.509494  405000 docker.go:218] disabling cri-docker service (if available) ...
	I1206 09:27:12.509562  405000 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 09:27:12.561878  405000 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 09:27:12.598609  405000 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 09:27:12.872841  405000 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 09:27:13.059884  405000 docker.go:234] disabling docker service ...
	I1206 09:27:13.059949  405000 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 09:27:13.093867  405000 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 09:27:13.120308  405000 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 09:27:13.320589  405000 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 09:27:13.498865  405000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 09:27:13.515293  405000 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:27:13.538889  405000 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 09:27:13.538948  405000 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:27:13.551961  405000 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 09:27:13.552020  405000 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:27:13.565424  405000 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:27:13.578556  405000 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:27:13.591163  405000 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 09:27:13.605026  405000 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:27:13.618520  405000 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:27:13.632537  405000 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:27:13.646329  405000 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 09:27:13.658570  405000 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 09:27:13.670728  405000 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:27:13.846425  405000 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 09:28:44.155984  405000 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.309526087s)
	I1206 09:28:44.156040  405000 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 09:28:44.156100  405000 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 09:28:44.162119  405000 start.go:564] Will wait 60s for crictl version
	I1206 09:28:44.162184  405000 ssh_runner.go:195] Run: which crictl
	I1206 09:28:44.166332  405000 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1206 09:28:44.207039  405000 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1206 09:28:44.207130  405000 ssh_runner.go:195] Run: crio --version
	I1206 09:28:44.238213  405000 ssh_runner.go:195] Run: crio --version
	I1206 09:28:44.269956  405000 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.29.1 ...
	I1206 09:28:44.274130  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:28:44.274499  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:28:44.274517  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:28:44.274693  405000 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1206 09:28:44.281120  405000 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1206 09:28:44.282155  405000 kubeadm.go:884] updating cluster {Name:functional-959292 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.35.0-beta.0 ClusterName:functional-959292 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.122 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
ntString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:28:44.282326  405000 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 09:28:44.282393  405000 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:28:44.325810  405000 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:28:44.325822  405000 crio.go:433] Images already preloaded, skipping extraction
	I1206 09:28:44.325876  405000 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:28:44.356541  405000 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:28:44.356553  405000 cache_images.go:86] Images are preloaded, skipping loading
	I1206 09:28:44.356560  405000 kubeadm.go:935] updating node { 192.168.39.122 8441 v1.35.0-beta.0 crio true true} ...
	I1206 09:28:44.356678  405000 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-959292 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.122
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-959292 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 09:28:44.356770  405000 ssh_runner.go:195] Run: crio config
	I1206 09:28:44.403814  405000 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1206 09:28:44.403837  405000 cni.go:84] Creating CNI manager for ""
	I1206 09:28:44.403854  405000 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 09:28:44.403866  405000 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 09:28:44.403896  405000 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.122 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-959292 NodeName:functional-959292 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.122"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.122 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false Kubelet
ConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:28:44.404049  405000 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.122
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-959292"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.122"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.122"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 09:28:44.404129  405000 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 09:28:44.416911  405000 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:28:44.416984  405000 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:28:44.431220  405000 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I1206 09:28:44.454568  405000 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 09:28:44.475028  405000 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2077 bytes)
	I1206 09:28:44.495506  405000 ssh_runner.go:195] Run: grep 192.168.39.122	control-plane.minikube.internal$ /etc/hosts
	I1206 09:28:44.499849  405000 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:28:44.665494  405000 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:28:44.684875  405000 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292 for IP: 192.168.39.122
	I1206 09:28:44.684888  405000 certs.go:195] generating shared ca certs ...
	I1206 09:28:44.684904  405000 certs.go:227] acquiring lock for ca certs: {Name:mk3de97d1b446a24abef5e763ff5edd1f090afa6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:28:44.685063  405000 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-392561/.minikube/ca.key
	I1206 09:28:44.685107  405000 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-392561/.minikube/proxy-client-ca.key
	I1206 09:28:44.685113  405000 certs.go:257] generating profile certs ...
	I1206 09:28:44.685293  405000 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/client.key
	I1206 09:28:44.685367  405000 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/apiserver.key.3de1f674
	I1206 09:28:44.685410  405000 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/proxy-client.key
	I1206 09:28:44.685527  405000 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/396534.pem (1338 bytes)
	W1206 09:28:44.685557  405000 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-392561/.minikube/certs/396534_empty.pem, impossibly tiny 0 bytes
	I1206 09:28:44.685563  405000 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 09:28:44.685587  405000 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca.pem (1082 bytes)
	I1206 09:28:44.685606  405000 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:28:44.685624  405000 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/key.pem (1679 bytes)
	I1206 09:28:44.685662  405000 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-392561/.minikube/files/etc/ssl/certs/3965342.pem (1708 bytes)
	I1206 09:28:44.686407  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:28:44.717857  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 09:28:44.748141  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:28:44.777905  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1206 09:28:44.808483  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 09:28:44.839184  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 09:28:44.869544  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:28:44.899600  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 09:28:44.929911  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/files/etc/ssl/certs/3965342.pem --> /usr/share/ca-certificates/3965342.pem (1708 bytes)
	I1206 09:28:44.959256  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:28:44.988361  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/certs/396534.pem --> /usr/share/ca-certificates/396534.pem (1338 bytes)
	I1206 09:28:45.017387  405000 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:28:45.038256  405000 ssh_runner.go:195] Run: openssl version
	I1206 09:28:45.047367  405000 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3965342.pem
	I1206 09:28:45.059555  405000 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3965342.pem /etc/ssl/certs/3965342.pem
	I1206 09:28:45.071389  405000 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3965342.pem
	I1206 09:28:45.076661  405000 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 09:24 /usr/share/ca-certificates/3965342.pem
	I1206 09:28:45.076758  405000 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3965342.pem
	I1206 09:28:45.084361  405000 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 09:28:45.096215  405000 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:28:45.108030  405000 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:28:45.119412  405000 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:28:45.124889  405000 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 09:12 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:28:45.124968  405000 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:28:45.132255  405000 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:28:45.143921  405000 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/396534.pem
	I1206 09:28:45.155198  405000 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/396534.pem /etc/ssl/certs/396534.pem
	I1206 09:28:45.166926  405000 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/396534.pem
	I1206 09:28:45.172011  405000 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 09:24 /usr/share/ca-certificates/396534.pem
	I1206 09:28:45.172075  405000 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/396534.pem
	I1206 09:28:45.179097  405000 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 09:28:45.190195  405000 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:28:45.195680  405000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 09:28:45.203086  405000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 09:28:45.210171  405000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 09:28:45.217010  405000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 09:28:45.223948  405000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 09:28:45.230923  405000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1206 09:28:45.238258  405000 kubeadm.go:401] StartCluster: {Name:functional-959292 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35
.0-beta.0 ClusterName:functional-959292 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.122 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountS
tring: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:28:45.238386  405000 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:28:45.238444  405000 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:28:45.272278  405000 cri.go:89] found id: "3cf20b6d798d084267897441ec085d74b908c5ce44f5a0104461078dade3e3db"
	I1206 09:28:45.272295  405000 cri.go:89] found id: "7e0e61506a238944c5ca29e0e8cd96198ed5f63ae148b71a98b24338f8cec799"
	I1206 09:28:45.272300  405000 cri.go:89] found id: "a7a0409ceca2bb30bc27bd580d3e96626e7b2fcec3e9bc911aba8663b88b14ab"
	I1206 09:28:45.272304  405000 cri.go:89] found id: "6e5074c405f22f240aeee9223542f189d50079b24d71ecc6920bdadbd0ba3be6"
	I1206 09:28:45.272307  405000 cri.go:89] found id: "422ce5b897d2b576b825cdca2cb0d613bfe2c99b74fe8984cd5904f6702c11f5"
	I1206 09:28:45.272311  405000 cri.go:89] found id: "f007b54f29b7c249c166e8323973f208279a7e516813e500f58a370519efedc3"
	I1206 09:28:45.272314  405000 cri.go:89] found id: "db52b0948589f2ebba355737cd876e78592bcd1b1561e85c9b037a02e9276902"
	I1206 09:28:45.272317  405000 cri.go:89] found id: ""
	I1206 09:28:45.272395  405000 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-959292 -n functional-959292
helpers_test.go:269: (dbg) Run:  kubectl --context functional-959292 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-5758569b79-xzgh7 hello-node-connect-9f67c86d4-bbj44 mysql-844cf969f6-f88x4 sp-pod
helpers_test.go:282: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-959292 describe pod busybox-mount hello-node-5758569b79-xzgh7 hello-node-connect-9f67c86d4-bbj44 mysql-844cf969f6-f88x4 sp-pod
helpers_test.go:290: (dbg) kubectl --context functional-959292 describe pod busybox-mount hello-node-5758569b79-xzgh7 hello-node-connect-9f67c86d4-bbj44 mysql-844cf969f6-f88x4 sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  mount-munger:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    Environment:  <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b968x (ro)
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-b968x:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             hello-node-5758569b79-xzgh7
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=hello-node
	                  pod-template-hash=5758569b79
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-node-5758569b79
	Containers:
	  echo-server:
	    Image:        kicbase/echo-server
	    Port:         <none>
	    Host Port:    <none>
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hxsl7 (ro)
	Volumes:
	  kube-api-access-hxsl7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             hello-node-connect-9f67c86d4-bbj44
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=hello-node-connect
	                  pod-template-hash=9f67c86d4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-node-connect-9f67c86d4
	Containers:
	  echo-server:
	    Image:        kicbase/echo-server
	    Port:         <none>
	    Host Port:    <none>
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k5nkl (ro)
	Volumes:
	  kube-api-access-k5nkl:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             mysql-844cf969f6-f88x4
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=mysql
	                  pod-template-hash=844cf969f6
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/mysql-844cf969f6
	Containers:
	  mysql:
	    Image:      docker.io/mysql:5.7
	    Port:       3306/TCP (mysql)
	    Host Port:  0/TCP (mysql)
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-49bd5 (ro)
	Volumes:
	  kube-api-access-49bd5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  myfrontend:
	    Image:        docker.io/nginx
	    Port:         <none>
	    Host Port:    <none>
	    Environment:  <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6dtzk (ro)
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-6dtzk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (603.19s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (367.6s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [7c30f196-67d2-42c3-bce2-de37e892b354] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003942082s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-959292 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-959292 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-959292 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-959292 apply -f testdata/storage-provisioner/pod.yaml
I1206 09:33:08.384587  396534 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [394d734d-f558-4dfe-a0ae-83a9990d8fd9] Pending
E1206 09:34:04.546939  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-310626/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:34:28.983218  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:34:32.252831  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-310626/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:35:52.052303  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:39:04.546288  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-310626/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_pvc_test.go:140: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 6m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-959292 -n functional-959292
functional_test_pvc_test.go:140: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-12-06 09:39:08.623342247 +0000 UTC m=+1652.032584130
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-959292 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-959292 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             <none>
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Containers:
  myfrontend:
    Image:        docker.io/nginx
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6dtzk (ro)
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-6dtzk:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-959292 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-959292 logs sp-pod -n default:
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 6m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-959292 -n functional-959292
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-959292 logs -n 25: (1.205788153s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-310626 image ls --format yaml --alsologtostderr                                                                                      │ functional-310626 │ jenkins │ v1.37.0 │ 06 Dec 25 09:24 UTC │ 06 Dec 25 09:24 UTC │
	│ ssh     │ functional-310626 ssh pgrep buildkitd                                                                                                           │ functional-310626 │ jenkins │ v1.37.0 │ 06 Dec 25 09:24 UTC │                     │
	│ ssh     │ functional-959292 ssh -n functional-959292 sudo cat /home/docker/cp-test.txt                                                                    │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ config  │ functional-959292 config get cpus                                                                                                               │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ image   │ functional-310626 image ls --format json --alsologtostderr                                                                                      │ functional-310626 │ jenkins │ v1.37.0 │ 06 Dec 25 09:24 UTC │ 06 Dec 25 09:24 UTC │
	│ image   │ functional-310626 image build -t localhost/my-image:functional-310626 testdata/build --alsologtostderr                                          │ functional-310626 │ jenkins │ v1.37.0 │ 06 Dec 25 09:24 UTC │ 06 Dec 25 09:24 UTC │
	│ config  │ functional-959292 config unset cpus                                                                                                             │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ image   │ functional-310626 image ls --format table --alsologtostderr                                                                                     │ functional-310626 │ jenkins │ v1.37.0 │ 06 Dec 25 09:24 UTC │ 06 Dec 25 09:24 UTC │
	│ image   │ functional-310626 image ls                                                                                                                      │ functional-310626 │ jenkins │ v1.37.0 │ 06 Dec 25 09:24 UTC │ 06 Dec 25 09:24 UTC │
	│ delete  │ -p functional-310626                                                                                                                            │ functional-310626 │ jenkins │ v1.37.0 │ 06 Dec 25 09:24 UTC │ 06 Dec 25 09:24 UTC │
	│ start   │ -p functional-959292 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:24 UTC │ 06 Dec 25 09:26 UTC │
	│ start   │ -p functional-959292 --alsologtostderr -v=8                                                                                                     │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:26 UTC │ 06 Dec 25 09:26 UTC │
	│ cache   │ functional-959292 cache add registry.k8s.io/pause:3.1                                                                                           │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:26 UTC │ 06 Dec 25 09:26 UTC │
	│ cache   │ functional-959292 cache add registry.k8s.io/pause:3.3                                                                                           │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:26 UTC │ 06 Dec 25 09:26 UTC │
	│ cache   │ functional-959292 cache add registry.k8s.io/pause:latest                                                                                        │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:26 UTC │ 06 Dec 25 09:27 UTC │
	│ cache   │ functional-959292 cache add minikube-local-cache-test:functional-959292                                                                         │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ cache   │ functional-959292 cache delete minikube-local-cache-test:functional-959292                                                                      │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ cache   │ list                                                                                                                                            │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ ssh     │ functional-959292 ssh -n functional-959292 sudo cat /tmp/does/not/exist/cp-test.txt                                                             │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ ssh     │ functional-959292 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                        │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ ssh     │ functional-959292 ssh echo hello                                                                                                                │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ ssh     │ functional-959292 ssh cat /etc/hostname                                                                                                         │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ addons  │ functional-959292 addons list                                                                                                                   │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ addons  │ functional-959292 addons list -o json                                                                                                           │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:27:05
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:27:05.354163  405000 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:27:05.354265  405000 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:27:05.354269  405000 out.go:374] Setting ErrFile to fd 2...
	I1206 09:27:05.354271  405000 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:27:05.354498  405000 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-392561/.minikube/bin
	I1206 09:27:05.355029  405000 out.go:368] Setting JSON to false
	I1206 09:27:05.356004  405000 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4165,"bootTime":1765009060,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:27:05.356057  405000 start.go:143] virtualization: kvm guest
	I1206 09:27:05.358398  405000 out.go:179] * [functional-959292] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:27:05.359753  405000 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:27:05.359781  405000 notify.go:221] Checking for updates...
	I1206 09:27:05.362220  405000 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:27:05.363383  405000 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-392561/kubeconfig
	I1206 09:27:05.367950  405000 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-392561/.minikube
	I1206 09:27:05.369254  405000 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:27:05.370459  405000 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:27:05.372160  405000 config.go:182] Loaded profile config "functional-959292": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:27:05.372262  405000 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:27:05.406948  405000 out.go:179] * Using the kvm2 driver based on existing profile
	I1206 09:27:05.408573  405000 start.go:309] selected driver: kvm2
	I1206 09:27:05.408583  405000 start.go:927] validating driver "kvm2" against &{Name:functional-959292 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-959292 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.122 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: Mount
MSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:27:05.408701  405000 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:27:05.409629  405000 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:27:05.409658  405000 cni.go:84] Creating CNI manager for ""
	I1206 09:27:05.409742  405000 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 09:27:05.409790  405000 start.go:353] cluster config:
	{Name:functional-959292 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-959292 Namespace:default AP
IServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.122 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:27:05.409882  405000 iso.go:125] acquiring lock: {Name:mkf36bf2c9901302dc74c7ac02d02007e6a978f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:27:05.411683  405000 out.go:179] * Starting "functional-959292" primary control-plane node in "functional-959292" cluster
	I1206 09:27:05.413086  405000 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 09:27:05.413115  405000 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-392561/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1206 09:27:05.413122  405000 cache.go:65] Caching tarball of preloaded images
	I1206 09:27:05.413214  405000 preload.go:238] Found /home/jenkins/minikube-integration/22047-392561/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 09:27:05.413220  405000 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1206 09:27:05.413331  405000 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/config.json ...
	I1206 09:27:05.413537  405000 start.go:360] acquireMachinesLock for functional-959292: {Name:mk0e8456872a81874c47f1b4b5997728e70c766d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1206 09:27:05.413614  405000 start.go:364] duration metric: took 62.698µs to acquireMachinesLock for "functional-959292"
	I1206 09:27:05.413630  405000 start.go:96] Skipping create...Using existing machine configuration
	I1206 09:27:05.413634  405000 fix.go:54] fixHost starting: 
	I1206 09:27:05.415678  405000 fix.go:112] recreateIfNeeded on functional-959292: state=Running err=<nil>
	W1206 09:27:05.415691  405000 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 09:27:05.417511  405000 out.go:252] * Updating the running kvm2 "functional-959292" VM ...
	I1206 09:27:05.417535  405000 machine.go:94] provisionDockerMachine start ...
	I1206 09:27:05.420691  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.421169  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:05.421191  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.421417  405000 main.go:143] libmachine: Using SSH client type: native
	I1206 09:27:05.421668  405000 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I1206 09:27:05.421672  405000 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 09:27:05.530432  405000 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-959292
	
	I1206 09:27:05.530457  405000 buildroot.go:166] provisioning hostname "functional-959292"
	I1206 09:27:05.533437  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.533923  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:05.533944  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.534145  405000 main.go:143] libmachine: Using SSH client type: native
	I1206 09:27:05.534373  405000 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I1206 09:27:05.534380  405000 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-959292 && echo "functional-959292" | sudo tee /etc/hostname
	I1206 09:27:05.673011  405000 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-959292
	
	I1206 09:27:05.676321  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.676815  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:05.676842  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.677084  405000 main.go:143] libmachine: Using SSH client type: native
	I1206 09:27:05.677310  405000 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I1206 09:27:05.677325  405000 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-959292' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-959292/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-959292' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 09:27:05.790461  405000 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 09:27:05.790486  405000 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22047-392561/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-392561/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-392561/.minikube}
	I1206 09:27:05.790531  405000 buildroot.go:174] setting up certificates
	I1206 09:27:05.790542  405000 provision.go:84] configureAuth start
	I1206 09:27:05.793758  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.794112  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:05.794125  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.796610  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.797015  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:05.797033  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.797173  405000 provision.go:143] copyHostCerts
	I1206 09:27:05.797219  405000 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-392561/.minikube/ca.pem, removing ...
	I1206 09:27:05.797225  405000 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-392561/.minikube/ca.pem
	I1206 09:27:05.797294  405000 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-392561/.minikube/ca.pem (1082 bytes)
	I1206 09:27:05.797448  405000 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-392561/.minikube/cert.pem, removing ...
	I1206 09:27:05.797454  405000 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-392561/.minikube/cert.pem
	I1206 09:27:05.797481  405000 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-392561/.minikube/cert.pem (1123 bytes)
	I1206 09:27:05.797559  405000 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-392561/.minikube/key.pem, removing ...
	I1206 09:27:05.797562  405000 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-392561/.minikube/key.pem
	I1206 09:27:05.797584  405000 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-392561/.minikube/key.pem (1679 bytes)
	I1206 09:27:05.797630  405000 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-392561/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca-key.pem org=jenkins.functional-959292 san=[127.0.0.1 192.168.39.122 functional-959292 localhost minikube]
	I1206 09:27:05.927749  405000 provision.go:177] copyRemoteCerts
	I1206 09:27:05.927805  405000 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 09:27:05.930467  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.930995  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:05.931017  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.931182  405000 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/functional-959292/id_rsa Username:docker}
	I1206 09:27:06.020293  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 09:27:06.062999  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 09:27:06.103800  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 09:27:06.135603  405000 provision.go:87] duration metric: took 345.046364ms to configureAuth
	I1206 09:27:06.135630  405000 buildroot.go:189] setting minikube options for container-runtime
	I1206 09:27:06.135924  405000 config.go:182] Loaded profile config "functional-959292": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:27:06.138757  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:06.139157  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:06.139176  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:06.139330  405000 main.go:143] libmachine: Using SSH client type: native
	I1206 09:27:06.139546  405000 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I1206 09:27:06.139563  405000 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 09:27:11.746822  405000 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 09:27:11.746840  405000 machine.go:97] duration metric: took 6.329297702s to provisionDockerMachine
	I1206 09:27:11.746854  405000 start.go:293] postStartSetup for "functional-959292" (driver="kvm2")
	I1206 09:27:11.746876  405000 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 09:27:11.746961  405000 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 09:27:11.750570  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:11.751014  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:11.751033  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:11.751196  405000 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/functional-959292/id_rsa Username:docker}
	I1206 09:27:11.837868  405000 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 09:27:11.843271  405000 info.go:137] Remote host: Buildroot 2025.02
	I1206 09:27:11.843298  405000 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-392561/.minikube/addons for local assets ...
	I1206 09:27:11.843387  405000 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-392561/.minikube/files for local assets ...
	I1206 09:27:11.843463  405000 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-392561/.minikube/files/etc/ssl/certs/3965342.pem -> 3965342.pem in /etc/ssl/certs
	I1206 09:27:11.843553  405000 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-392561/.minikube/files/etc/test/nested/copy/396534/hosts -> hosts in /etc/test/nested/copy/396534
	I1206 09:27:11.843597  405000 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/396534
	I1206 09:27:11.856490  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/files/etc/ssl/certs/3965342.pem --> /etc/ssl/certs/3965342.pem (1708 bytes)
	I1206 09:27:11.887680  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/files/etc/test/nested/copy/396534/hosts --> /etc/test/nested/copy/396534/hosts (40 bytes)
	I1206 09:27:11.917471  405000 start.go:296] duration metric: took 170.599577ms for postStartSetup
	I1206 09:27:11.917525  405000 fix.go:56] duration metric: took 6.503890577s for fixHost
	I1206 09:27:11.920391  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:11.920829  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:11.920843  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:11.921039  405000 main.go:143] libmachine: Using SSH client type: native
	I1206 09:27:11.921236  405000 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I1206 09:27:11.921240  405000 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1206 09:27:12.029674  405000 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765013232.025035703
	
	I1206 09:27:12.029690  405000 fix.go:216] guest clock: 1765013232.025035703
	I1206 09:27:12.029728  405000 fix.go:229] Guest: 2025-12-06 09:27:12.025035703 +0000 UTC Remote: 2025-12-06 09:27:11.917528099 +0000 UTC m=+6.615934527 (delta=107.507604ms)
	I1206 09:27:12.029754  405000 fix.go:200] guest clock delta is within tolerance: 107.507604ms
	I1206 09:27:12.029760  405000 start.go:83] releasing machines lock for "functional-959292", held for 6.616137159s
	I1206 09:27:12.032871  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:12.033367  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:12.033386  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:12.033972  405000 ssh_runner.go:195] Run: cat /version.json
	I1206 09:27:12.034041  405000 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 09:27:12.037021  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:12.037356  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:12.037372  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:12.037454  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:12.037528  405000 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/functional-959292/id_rsa Username:docker}
	I1206 09:27:12.037968  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:12.037994  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:12.038195  405000 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/functional-959292/id_rsa Username:docker}
	I1206 09:27:12.135127  405000 ssh_runner.go:195] Run: systemctl --version
	I1206 09:27:12.178512  405000 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 09:27:12.381531  405000 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 09:27:12.396979  405000 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 09:27:12.397040  405000 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 09:27:12.420072  405000 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 09:27:12.420094  405000 start.go:496] detecting cgroup driver to use...
	I1206 09:27:12.420194  405000 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 09:27:12.466799  405000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 09:27:12.509494  405000 docker.go:218] disabling cri-docker service (if available) ...
	I1206 09:27:12.509562  405000 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 09:27:12.561878  405000 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 09:27:12.598609  405000 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 09:27:12.872841  405000 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 09:27:13.059884  405000 docker.go:234] disabling docker service ...
	I1206 09:27:13.059949  405000 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 09:27:13.093867  405000 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 09:27:13.120308  405000 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 09:27:13.320589  405000 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 09:27:13.498865  405000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 09:27:13.515293  405000 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:27:13.538889  405000 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 09:27:13.538948  405000 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:27:13.551961  405000 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 09:27:13.552020  405000 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:27:13.565424  405000 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:27:13.578556  405000 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:27:13.591163  405000 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 09:27:13.605026  405000 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:27:13.618520  405000 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:27:13.632537  405000 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:27:13.646329  405000 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 09:27:13.658570  405000 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 09:27:13.670728  405000 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:27:13.846425  405000 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 09:28:44.155984  405000 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.309526087s)
	I1206 09:28:44.156040  405000 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 09:28:44.156100  405000 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 09:28:44.162119  405000 start.go:564] Will wait 60s for crictl version
	I1206 09:28:44.162184  405000 ssh_runner.go:195] Run: which crictl
	I1206 09:28:44.166332  405000 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1206 09:28:44.207039  405000 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1206 09:28:44.207130  405000 ssh_runner.go:195] Run: crio --version
	I1206 09:28:44.238213  405000 ssh_runner.go:195] Run: crio --version
	I1206 09:28:44.269956  405000 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.29.1 ...
	I1206 09:28:44.274130  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:28:44.274499  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:28:44.274517  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:28:44.274693  405000 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1206 09:28:44.281120  405000 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1206 09:28:44.282155  405000 kubeadm.go:884] updating cluster {Name:functional-959292 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.35.0-beta.0 ClusterName:functional-959292 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.122 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
ntString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:28:44.282326  405000 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 09:28:44.282393  405000 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:28:44.325810  405000 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:28:44.325822  405000 crio.go:433] Images already preloaded, skipping extraction
	I1206 09:28:44.325876  405000 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:28:44.356541  405000 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:28:44.356553  405000 cache_images.go:86] Images are preloaded, skipping loading
	I1206 09:28:44.356560  405000 kubeadm.go:935] updating node { 192.168.39.122 8441 v1.35.0-beta.0 crio true true} ...
	I1206 09:28:44.356678  405000 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-959292 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.122
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-959292 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 09:28:44.356770  405000 ssh_runner.go:195] Run: crio config
	I1206 09:28:44.403814  405000 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1206 09:28:44.403837  405000 cni.go:84] Creating CNI manager for ""
	I1206 09:28:44.403854  405000 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 09:28:44.403866  405000 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 09:28:44.403896  405000 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.122 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-959292 NodeName:functional-959292 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.122"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.122 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false Kubelet
ConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:28:44.404049  405000 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.122
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-959292"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.122"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.122"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 09:28:44.404129  405000 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 09:28:44.416911  405000 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:28:44.416984  405000 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:28:44.431220  405000 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I1206 09:28:44.454568  405000 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 09:28:44.475028  405000 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2077 bytes)
	I1206 09:28:44.495506  405000 ssh_runner.go:195] Run: grep 192.168.39.122	control-plane.minikube.internal$ /etc/hosts
	I1206 09:28:44.499849  405000 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:28:44.665494  405000 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:28:44.684875  405000 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292 for IP: 192.168.39.122
	I1206 09:28:44.684888  405000 certs.go:195] generating shared ca certs ...
	I1206 09:28:44.684904  405000 certs.go:227] acquiring lock for ca certs: {Name:mk3de97d1b446a24abef5e763ff5edd1f090afa6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:28:44.685063  405000 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-392561/.minikube/ca.key
	I1206 09:28:44.685107  405000 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-392561/.minikube/proxy-client-ca.key
	I1206 09:28:44.685113  405000 certs.go:257] generating profile certs ...
	I1206 09:28:44.685293  405000 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/client.key
	I1206 09:28:44.685367  405000 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/apiserver.key.3de1f674
	I1206 09:28:44.685410  405000 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/proxy-client.key
	I1206 09:28:44.685527  405000 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/396534.pem (1338 bytes)
	W1206 09:28:44.685557  405000 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-392561/.minikube/certs/396534_empty.pem, impossibly tiny 0 bytes
	I1206 09:28:44.685563  405000 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 09:28:44.685587  405000 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca.pem (1082 bytes)
	I1206 09:28:44.685606  405000 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:28:44.685624  405000 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/key.pem (1679 bytes)
	I1206 09:28:44.685662  405000 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-392561/.minikube/files/etc/ssl/certs/3965342.pem (1708 bytes)
	I1206 09:28:44.686407  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:28:44.717857  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 09:28:44.748141  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:28:44.777905  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1206 09:28:44.808483  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 09:28:44.839184  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 09:28:44.869544  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:28:44.899600  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 09:28:44.929911  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/files/etc/ssl/certs/3965342.pem --> /usr/share/ca-certificates/3965342.pem (1708 bytes)
	I1206 09:28:44.959256  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:28:44.988361  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/certs/396534.pem --> /usr/share/ca-certificates/396534.pem (1338 bytes)
	I1206 09:28:45.017387  405000 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:28:45.038256  405000 ssh_runner.go:195] Run: openssl version
	I1206 09:28:45.047367  405000 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3965342.pem
	I1206 09:28:45.059555  405000 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3965342.pem /etc/ssl/certs/3965342.pem
	I1206 09:28:45.071389  405000 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3965342.pem
	I1206 09:28:45.076661  405000 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 09:24 /usr/share/ca-certificates/3965342.pem
	I1206 09:28:45.076758  405000 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3965342.pem
	I1206 09:28:45.084361  405000 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 09:28:45.096215  405000 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:28:45.108030  405000 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:28:45.119412  405000 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:28:45.124889  405000 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 09:12 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:28:45.124968  405000 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:28:45.132255  405000 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:28:45.143921  405000 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/396534.pem
	I1206 09:28:45.155198  405000 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/396534.pem /etc/ssl/certs/396534.pem
	I1206 09:28:45.166926  405000 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/396534.pem
	I1206 09:28:45.172011  405000 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 09:24 /usr/share/ca-certificates/396534.pem
	I1206 09:28:45.172075  405000 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/396534.pem
	I1206 09:28:45.179097  405000 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 09:28:45.190195  405000 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:28:45.195680  405000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 09:28:45.203086  405000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 09:28:45.210171  405000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 09:28:45.217010  405000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 09:28:45.223948  405000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 09:28:45.230923  405000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1206 09:28:45.238258  405000 kubeadm.go:401] StartCluster: {Name:functional-959292 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35
.0-beta.0 ClusterName:functional-959292 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.122 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountS
tring: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:28:45.238386  405000 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:28:45.238444  405000 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:28:45.272278  405000 cri.go:89] found id: "3cf20b6d798d084267897441ec085d74b908c5ce44f5a0104461078dade3e3db"
	I1206 09:28:45.272295  405000 cri.go:89] found id: "7e0e61506a238944c5ca29e0e8cd96198ed5f63ae148b71a98b24338f8cec799"
	I1206 09:28:45.272300  405000 cri.go:89] found id: "a7a0409ceca2bb30bc27bd580d3e96626e7b2fcec3e9bc911aba8663b88b14ab"
	I1206 09:28:45.272304  405000 cri.go:89] found id: "6e5074c405f22f240aeee9223542f189d50079b24d71ecc6920bdadbd0ba3be6"
	I1206 09:28:45.272307  405000 cri.go:89] found id: "422ce5b897d2b576b825cdca2cb0d613bfe2c99b74fe8984cd5904f6702c11f5"
	I1206 09:28:45.272311  405000 cri.go:89] found id: "f007b54f29b7c249c166e8323973f208279a7e516813e500f58a370519efedc3"
	I1206 09:28:45.272314  405000 cri.go:89] found id: "db52b0948589f2ebba355737cd876e78592bcd1b1561e85c9b037a02e9276902"
	I1206 09:28:45.272317  405000 cri.go:89] found id: ""
	I1206 09:28:45.272395  405000 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
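The ClusterConfiguration printed in the log above is rendered from the cluster config shown earlier in the same log; note how the single ExtraOptions entry (apiserver enable-admission-plugins=NamespaceAutoProvision) surfaces under apiServer.extraArgs. As a rough, hypothetical illustration only (not minikube's actual template, and the field names below are made up for the sketch), a minimal Go text/template program that produces the same apiServer stanza:

package main

import (
	"os"
	"text/template"
)

// Toy template for just the apiServer stanza of the ClusterConfiguration above.
// This is NOT minikube's real template; the data shape is invented for this sketch.
const apiServerTmpl = `apiServer:
  certSANs: ["127.0.0.1", "localhost", "{{.NodeIP}}"]
  extraArgs:
{{- range $k, $v := .ExtraArgs}}
    - name: "{{$k}}"
      value: "{{$v}}"
{{- end}}
`

func main() {
	data := struct {
		NodeIP    string
		ExtraArgs map[string]string
	}{
		NodeIP:    "192.168.39.122",
		ExtraArgs: map[string]string{"enable-admission-plugins": "NamespaceAutoProvision"},
	}
	tmpl := template.Must(template.New("apiServer").Parse(apiServerTmpl))
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}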
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-959292 -n functional-959292
helpers_test.go:269: (dbg) Run:  kubectl --context functional-959292 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-5758569b79-xzgh7 hello-node-connect-9f67c86d4-bbj44 mysql-844cf969f6-f88x4 sp-pod
helpers_test.go:282: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-959292 describe pod hello-node-5758569b79-xzgh7 hello-node-connect-9f67c86d4-bbj44 mysql-844cf969f6-f88x4 sp-pod
helpers_test.go:290: (dbg) kubectl --context functional-959292 describe pod hello-node-5758569b79-xzgh7 hello-node-connect-9f67c86d4-bbj44 mysql-844cf969f6-f88x4 sp-pod:

                                                
                                                
-- stdout --
	Name:             hello-node-5758569b79-xzgh7
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=hello-node
	                  pod-template-hash=5758569b79
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-node-5758569b79
	Containers:
	  echo-server:
	    Image:        kicbase/echo-server
	    Port:         <none>
	    Host Port:    <none>
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hxsl7 (ro)
	Volumes:
	  kube-api-access-hxsl7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             hello-node-connect-9f67c86d4-bbj44
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=hello-node-connect
	                  pod-template-hash=9f67c86d4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-node-connect-9f67c86d4
	Containers:
	  echo-server:
	    Image:        kicbase/echo-server
	    Port:         <none>
	    Host Port:    <none>
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k5nkl (ro)
	Volumes:
	  kube-api-access-k5nkl:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             mysql-844cf969f6-f88x4
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=mysql
	                  pod-template-hash=844cf969f6
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/mysql-844cf969f6
	Containers:
	  mysql:
	    Image:      docker.io/mysql:5.7
	    Port:       3306/TCP (mysql)
	    Host Port:  0/TCP (mysql)
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-49bd5 (ro)
	Volumes:
	  kube-api-access-49bd5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  myfrontend:
	    Image:        docker.io/nginx
	    Port:         <none>
	    Host Port:    <none>
	    Environment:  <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6dtzk (ro)
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-6dtzk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (367.60s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (602.85s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-959292 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-844cf969f6-f88x4" [8a5d4dc7-0236-44e5-a87c-942e02d3d931] Pending
functional_test.go:1804: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1804: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-959292 -n functional-959292
functional_test.go:1804: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: showing logs for failed pods as of 2025-12-06 09:43:02.614627999 +0000 UTC m=+1886.023869882
functional_test.go:1804: (dbg) Run:  kubectl --context functional-959292 describe po mysql-844cf969f6-f88x4 -n default
functional_test.go:1804: (dbg) kubectl --context functional-959292 describe po mysql-844cf969f6-f88x4 -n default:
Name:             mysql-844cf969f6-f88x4
Namespace:        default
Priority:         0
Service Account:  default
Node:             <none>
Labels:           app=mysql
                  pod-template-hash=844cf969f6
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    ReplicaSet/mysql-844cf969f6
Containers:
  mysql:
    Image:      docker.io/mysql:5.7
    Port:       3306/TCP (mysql)
    Host Port:  0/TCP (mysql)
    Limits:
      cpu:     700m
      memory:  700Mi
    Requests:
      cpu:     600m
      memory:  512Mi
    Environment:
      MYSQL_ROOT_PASSWORD:  password
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-49bd5 (ro)
Volumes:
  kube-api-access-49bd5:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>
functional_test.go:1804: (dbg) Run:  kubectl --context functional-959292 logs mysql-844cf969f6-f88x4 -n default
functional_test.go:1804: (dbg) kubectl --context functional-959292 logs mysql-844cf969f6-f88x4 -n default:
functional_test.go:1806: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-959292 -n functional-959292
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-959292 logs -n 25: (1.494989375s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-310626 image ls --format table --alsologtostderr                                                                                     │ functional-310626 │ jenkins │ v1.37.0 │ 06 Dec 25 09:24 UTC │ 06 Dec 25 09:24 UTC │
	│ image   │ functional-310626 image ls                                                                                                                      │ functional-310626 │ jenkins │ v1.37.0 │ 06 Dec 25 09:24 UTC │ 06 Dec 25 09:24 UTC │
	│ delete  │ -p functional-310626                                                                                                                            │ functional-310626 │ jenkins │ v1.37.0 │ 06 Dec 25 09:24 UTC │ 06 Dec 25 09:24 UTC │
	│ start   │ -p functional-959292 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:24 UTC │ 06 Dec 25 09:26 UTC │
	│ start   │ -p functional-959292 --alsologtostderr -v=8                                                                                                     │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:26 UTC │ 06 Dec 25 09:26 UTC │
	│ cache   │ functional-959292 cache add registry.k8s.io/pause:3.1                                                                                           │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:26 UTC │ 06 Dec 25 09:26 UTC │
	│ cache   │ functional-959292 cache add registry.k8s.io/pause:3.3                                                                                           │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:26 UTC │ 06 Dec 25 09:26 UTC │
	│ cache   │ functional-959292 cache add registry.k8s.io/pause:latest                                                                                        │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:26 UTC │ 06 Dec 25 09:27 UTC │
	│ cache   │ functional-959292 cache add minikube-local-cache-test:functional-959292                                                                         │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ cache   │ functional-959292 cache delete minikube-local-cache-test:functional-959292                                                                      │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ cache   │ list                                                                                                                                            │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ ssh     │ functional-959292 ssh -n functional-959292 sudo cat /tmp/does/not/exist/cp-test.txt                                                             │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ ssh     │ functional-959292 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                        │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ ssh     │ functional-959292 ssh echo hello                                                                                                                │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ ssh     │ functional-959292 ssh cat /etc/hostname                                                                                                         │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ addons  │ functional-959292 addons list                                                                                                                   │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ addons  │ functional-959292 addons list -o json                                                                                                           │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ license │                                                                                                                                                 │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 09:39 UTC │ 06 Dec 25 09:39 UTC │
	│ mount   │ -p functional-959292 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1237853046/001:/mount-9p --alsologtostderr -v=1          │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:39 UTC │                     │
	│ ssh     │ functional-959292 ssh findmnt -T /mount-9p | grep 9p                                                                                            │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:39 UTC │                     │
	│ ssh     │ functional-959292 ssh findmnt -T /mount-9p | grep 9p                                                                                            │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:39 UTC │ 06 Dec 25 09:39 UTC │
	│ ssh     │ functional-959292 ssh -- ls -la /mount-9p                                                                                                       │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:39 UTC │ 06 Dec 25 09:39 UTC │
	│ ssh     │ functional-959292 ssh cat /mount-9p/test-1765013952684701104                                                                                    │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:39 UTC │ 06 Dec 25 09:39 UTC │
	│ service │ functional-959292 service list                                                                                                                  │ functional-959292 │ jenkins │ v1.37.0 │ 06 Dec 25 09:43 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:27:05
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:27:05.354163  405000 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:27:05.354265  405000 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:27:05.354269  405000 out.go:374] Setting ErrFile to fd 2...
	I1206 09:27:05.354271  405000 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:27:05.354498  405000 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-392561/.minikube/bin
	I1206 09:27:05.355029  405000 out.go:368] Setting JSON to false
	I1206 09:27:05.356004  405000 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4165,"bootTime":1765009060,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:27:05.356057  405000 start.go:143] virtualization: kvm guest
	I1206 09:27:05.358398  405000 out.go:179] * [functional-959292] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:27:05.359753  405000 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:27:05.359781  405000 notify.go:221] Checking for updates...
	I1206 09:27:05.362220  405000 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:27:05.363383  405000 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-392561/kubeconfig
	I1206 09:27:05.367950  405000 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-392561/.minikube
	I1206 09:27:05.369254  405000 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:27:05.370459  405000 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:27:05.372160  405000 config.go:182] Loaded profile config "functional-959292": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:27:05.372262  405000 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:27:05.406948  405000 out.go:179] * Using the kvm2 driver based on existing profile
	I1206 09:27:05.408573  405000 start.go:309] selected driver: kvm2
	I1206 09:27:05.408583  405000 start.go:927] validating driver "kvm2" against &{Name:functional-959292 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-959292 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.122 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:27:05.408701  405000 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:27:05.409629  405000 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:27:05.409658  405000 cni.go:84] Creating CNI manager for ""
	I1206 09:27:05.409742  405000 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 09:27:05.409790  405000 start.go:353] cluster config:
	{Name:functional-959292 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-959292 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.122 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:27:05.409882  405000 iso.go:125] acquiring lock: {Name:mkf36bf2c9901302dc74c7ac02d02007e6a978f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:27:05.411683  405000 out.go:179] * Starting "functional-959292" primary control-plane node in "functional-959292" cluster
	I1206 09:27:05.413086  405000 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 09:27:05.413115  405000 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-392561/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1206 09:27:05.413122  405000 cache.go:65] Caching tarball of preloaded images
	I1206 09:27:05.413214  405000 preload.go:238] Found /home/jenkins/minikube-integration/22047-392561/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 09:27:05.413220  405000 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1206 09:27:05.413331  405000 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/config.json ...
	I1206 09:27:05.413537  405000 start.go:360] acquireMachinesLock for functional-959292: {Name:mk0e8456872a81874c47f1b4b5997728e70c766d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1206 09:27:05.413614  405000 start.go:364] duration metric: took 62.698µs to acquireMachinesLock for "functional-959292"
	I1206 09:27:05.413630  405000 start.go:96] Skipping create...Using existing machine configuration
	I1206 09:27:05.413634  405000 fix.go:54] fixHost starting: 
	I1206 09:27:05.415678  405000 fix.go:112] recreateIfNeeded on functional-959292: state=Running err=<nil>
	W1206 09:27:05.415691  405000 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 09:27:05.417511  405000 out.go:252] * Updating the running kvm2 "functional-959292" VM ...
	I1206 09:27:05.417535  405000 machine.go:94] provisionDockerMachine start ...
	I1206 09:27:05.420691  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.421169  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:05.421191  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.421417  405000 main.go:143] libmachine: Using SSH client type: native
	I1206 09:27:05.421668  405000 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I1206 09:27:05.421672  405000 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 09:27:05.530432  405000 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-959292
	
	I1206 09:27:05.530457  405000 buildroot.go:166] provisioning hostname "functional-959292"
	I1206 09:27:05.533437  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.533923  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:05.533944  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.534145  405000 main.go:143] libmachine: Using SSH client type: native
	I1206 09:27:05.534373  405000 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I1206 09:27:05.534380  405000 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-959292 && echo "functional-959292" | sudo tee /etc/hostname
	I1206 09:27:05.673011  405000 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-959292
	
	I1206 09:27:05.676321  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.676815  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:05.676842  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.677084  405000 main.go:143] libmachine: Using SSH client type: native
	I1206 09:27:05.677310  405000 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I1206 09:27:05.677325  405000 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-959292' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-959292/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-959292' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 09:27:05.790461  405000 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 09:27:05.790486  405000 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22047-392561/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-392561/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-392561/.minikube}
	I1206 09:27:05.790531  405000 buildroot.go:174] setting up certificates
	I1206 09:27:05.790542  405000 provision.go:84] configureAuth start
	I1206 09:27:05.793758  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.794112  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:05.794125  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.796610  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.797015  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:05.797033  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.797173  405000 provision.go:143] copyHostCerts
	I1206 09:27:05.797219  405000 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-392561/.minikube/ca.pem, removing ...
	I1206 09:27:05.797225  405000 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-392561/.minikube/ca.pem
	I1206 09:27:05.797294  405000 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-392561/.minikube/ca.pem (1082 bytes)
	I1206 09:27:05.797448  405000 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-392561/.minikube/cert.pem, removing ...
	I1206 09:27:05.797454  405000 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-392561/.minikube/cert.pem
	I1206 09:27:05.797481  405000 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-392561/.minikube/cert.pem (1123 bytes)
	I1206 09:27:05.797559  405000 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-392561/.minikube/key.pem, removing ...
	I1206 09:27:05.797562  405000 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-392561/.minikube/key.pem
	I1206 09:27:05.797584  405000 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-392561/.minikube/key.pem (1679 bytes)
	I1206 09:27:05.797630  405000 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-392561/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca-key.pem org=jenkins.functional-959292 san=[127.0.0.1 192.168.39.122 functional-959292 localhost minikube]
	I1206 09:27:05.927749  405000 provision.go:177] copyRemoteCerts
	I1206 09:27:05.927805  405000 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 09:27:05.930467  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.930995  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:05.931017  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:05.931182  405000 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/functional-959292/id_rsa Username:docker}
	I1206 09:27:06.020293  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 09:27:06.062999  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 09:27:06.103800  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 09:27:06.135603  405000 provision.go:87] duration metric: took 345.046364ms to configureAuth
	I1206 09:27:06.135630  405000 buildroot.go:189] setting minikube options for container-runtime
	I1206 09:27:06.135924  405000 config.go:182] Loaded profile config "functional-959292": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:27:06.138757  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:06.139157  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:06.139176  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:06.139330  405000 main.go:143] libmachine: Using SSH client type: native
	I1206 09:27:06.139546  405000 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I1206 09:27:06.139563  405000 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 09:27:11.746822  405000 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 09:27:11.746840  405000 machine.go:97] duration metric: took 6.329297702s to provisionDockerMachine
	I1206 09:27:11.746854  405000 start.go:293] postStartSetup for "functional-959292" (driver="kvm2")
	I1206 09:27:11.746876  405000 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 09:27:11.746961  405000 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 09:27:11.750570  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:11.751014  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:11.751033  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:11.751196  405000 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/functional-959292/id_rsa Username:docker}
	I1206 09:27:11.837868  405000 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 09:27:11.843271  405000 info.go:137] Remote host: Buildroot 2025.02
	I1206 09:27:11.843298  405000 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-392561/.minikube/addons for local assets ...
	I1206 09:27:11.843387  405000 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-392561/.minikube/files for local assets ...
	I1206 09:27:11.843463  405000 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-392561/.minikube/files/etc/ssl/certs/3965342.pem -> 3965342.pem in /etc/ssl/certs
	I1206 09:27:11.843553  405000 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-392561/.minikube/files/etc/test/nested/copy/396534/hosts -> hosts in /etc/test/nested/copy/396534
	I1206 09:27:11.843597  405000 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/396534
	I1206 09:27:11.856490  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/files/etc/ssl/certs/3965342.pem --> /etc/ssl/certs/3965342.pem (1708 bytes)
	I1206 09:27:11.887680  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/files/etc/test/nested/copy/396534/hosts --> /etc/test/nested/copy/396534/hosts (40 bytes)
	I1206 09:27:11.917471  405000 start.go:296] duration metric: took 170.599577ms for postStartSetup
	I1206 09:27:11.917525  405000 fix.go:56] duration metric: took 6.503890577s for fixHost
	I1206 09:27:11.920391  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:11.920829  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:11.920843  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:11.921039  405000 main.go:143] libmachine: Using SSH client type: native
	I1206 09:27:11.921236  405000 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I1206 09:27:11.921240  405000 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1206 09:27:12.029674  405000 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765013232.025035703
	
	I1206 09:27:12.029690  405000 fix.go:216] guest clock: 1765013232.025035703
	I1206 09:27:12.029728  405000 fix.go:229] Guest: 2025-12-06 09:27:12.025035703 +0000 UTC Remote: 2025-12-06 09:27:11.917528099 +0000 UTC m=+6.615934527 (delta=107.507604ms)
	I1206 09:27:12.029754  405000 fix.go:200] guest clock delta is within tolerance: 107.507604ms
	I1206 09:27:12.029760  405000 start.go:83] releasing machines lock for "functional-959292", held for 6.616137159s
	I1206 09:27:12.032871  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:12.033367  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:12.033386  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:12.033972  405000 ssh_runner.go:195] Run: cat /version.json
	I1206 09:27:12.034041  405000 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 09:27:12.037021  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:12.037356  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:12.037372  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:12.037454  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:12.037528  405000 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/functional-959292/id_rsa Username:docker}
	I1206 09:27:12.037968  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:27:12.037994  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:27:12.038195  405000 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/functional-959292/id_rsa Username:docker}
	I1206 09:27:12.135127  405000 ssh_runner.go:195] Run: systemctl --version
	I1206 09:27:12.178512  405000 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 09:27:12.381531  405000 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 09:27:12.396979  405000 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 09:27:12.397040  405000 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 09:27:12.420072  405000 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 09:27:12.420094  405000 start.go:496] detecting cgroup driver to use...
	I1206 09:27:12.420194  405000 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 09:27:12.466799  405000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 09:27:12.509494  405000 docker.go:218] disabling cri-docker service (if available) ...
	I1206 09:27:12.509562  405000 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 09:27:12.561878  405000 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 09:27:12.598609  405000 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 09:27:12.872841  405000 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 09:27:13.059884  405000 docker.go:234] disabling docker service ...
	I1206 09:27:13.059949  405000 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 09:27:13.093867  405000 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 09:27:13.120308  405000 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 09:27:13.320589  405000 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 09:27:13.498865  405000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 09:27:13.515293  405000 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:27:13.538889  405000 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 09:27:13.538948  405000 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:27:13.551961  405000 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 09:27:13.552020  405000 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:27:13.565424  405000 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:27:13.578556  405000 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:27:13.591163  405000 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 09:27:13.605026  405000 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:27:13.618520  405000 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:27:13.632537  405000 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 09:27:13.646329  405000 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 09:27:13.658570  405000 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 09:27:13.670728  405000 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:27:13.846425  405000 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 09:28:44.155984  405000 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.309526087s)
	I1206 09:28:44.156040  405000 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 09:28:44.156100  405000 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 09:28:44.162119  405000 start.go:564] Will wait 60s for crictl version
	I1206 09:28:44.162184  405000 ssh_runner.go:195] Run: which crictl
	I1206 09:28:44.166332  405000 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1206 09:28:44.207039  405000 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1206 09:28:44.207130  405000 ssh_runner.go:195] Run: crio --version
	I1206 09:28:44.238213  405000 ssh_runner.go:195] Run: crio --version
	I1206 09:28:44.269956  405000 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.29.1 ...
	I1206 09:28:44.274130  405000 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:28:44.274499  405000 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
	I1206 09:28:44.274517  405000 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
	I1206 09:28:44.274693  405000 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1206 09:28:44.281120  405000 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1206 09:28:44.282155  405000 kubeadm.go:884] updating cluster {Name:functional-959292 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-959292 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.122 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:28:44.282326  405000 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 09:28:44.282393  405000 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:28:44.325810  405000 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:28:44.325822  405000 crio.go:433] Images already preloaded, skipping extraction
	I1206 09:28:44.325876  405000 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 09:28:44.356541  405000 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 09:28:44.356553  405000 cache_images.go:86] Images are preloaded, skipping loading
	I1206 09:28:44.356560  405000 kubeadm.go:935] updating node { 192.168.39.122 8441 v1.35.0-beta.0 crio true true} ...
	I1206 09:28:44.356678  405000 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-959292 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.122
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-959292 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 09:28:44.356770  405000 ssh_runner.go:195] Run: crio config
	I1206 09:28:44.403814  405000 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1206 09:28:44.403837  405000 cni.go:84] Creating CNI manager for ""
	I1206 09:28:44.403854  405000 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 09:28:44.403866  405000 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 09:28:44.403896  405000 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.122 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-959292 NodeName:functional-959292 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.122"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.122 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:28:44.404049  405000 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.122
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-959292"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.122"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.122"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 09:28:44.404129  405000 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 09:28:44.416911  405000 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:28:44.416984  405000 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:28:44.431220  405000 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I1206 09:28:44.454568  405000 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 09:28:44.475028  405000 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2077 bytes)
	I1206 09:28:44.495506  405000 ssh_runner.go:195] Run: grep 192.168.39.122	control-plane.minikube.internal$ /etc/hosts
	I1206 09:28:44.499849  405000 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:28:44.665494  405000 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:28:44.684875  405000 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292 for IP: 192.168.39.122
	I1206 09:28:44.684888  405000 certs.go:195] generating shared ca certs ...
	I1206 09:28:44.684904  405000 certs.go:227] acquiring lock for ca certs: {Name:mk3de97d1b446a24abef5e763ff5edd1f090afa6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:28:44.685063  405000 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-392561/.minikube/ca.key
	I1206 09:28:44.685107  405000 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-392561/.minikube/proxy-client-ca.key
	I1206 09:28:44.685113  405000 certs.go:257] generating profile certs ...
	I1206 09:28:44.685293  405000 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/client.key
	I1206 09:28:44.685367  405000 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/apiserver.key.3de1f674
	I1206 09:28:44.685410  405000 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/proxy-client.key
	I1206 09:28:44.685527  405000 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/396534.pem (1338 bytes)
	W1206 09:28:44.685557  405000 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-392561/.minikube/certs/396534_empty.pem, impossibly tiny 0 bytes
	I1206 09:28:44.685563  405000 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 09:28:44.685587  405000 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca.pem (1082 bytes)
	I1206 09:28:44.685606  405000 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:28:44.685624  405000 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/key.pem (1679 bytes)
	I1206 09:28:44.685662  405000 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-392561/.minikube/files/etc/ssl/certs/3965342.pem (1708 bytes)
	I1206 09:28:44.686407  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:28:44.717857  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 09:28:44.748141  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:28:44.777905  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1206 09:28:44.808483  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 09:28:44.839184  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 09:28:44.869544  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:28:44.899600  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 09:28:44.929911  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/files/etc/ssl/certs/3965342.pem --> /usr/share/ca-certificates/3965342.pem (1708 bytes)
	I1206 09:28:44.959256  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:28:44.988361  405000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/certs/396534.pem --> /usr/share/ca-certificates/396534.pem (1338 bytes)
	I1206 09:28:45.017387  405000 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
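	# Sketch (not run by the test): the scp entries above copy the CA, the profile
	# certificates and the kubeconfig into the guest. One quick sanity check is to
	# confirm the apiserver cert chains to that copied CA, using the paths from
	# this log:
	minikube -p functional-959292 ssh -- sudo openssl verify -CAfile /var/lib/minikube/certs/ca.crt /var/lib/minikube/certs/apiserver.crt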
	I1206 09:28:45.038256  405000 ssh_runner.go:195] Run: openssl version
	I1206 09:28:45.047367  405000 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3965342.pem
	I1206 09:28:45.059555  405000 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3965342.pem /etc/ssl/certs/3965342.pem
	I1206 09:28:45.071389  405000 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3965342.pem
	I1206 09:28:45.076661  405000 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 09:24 /usr/share/ca-certificates/3965342.pem
	I1206 09:28:45.076758  405000 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3965342.pem
	I1206 09:28:45.084361  405000 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 09:28:45.096215  405000 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:28:45.108030  405000 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:28:45.119412  405000 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:28:45.124889  405000 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 09:12 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:28:45.124968  405000 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:28:45.132255  405000 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:28:45.143921  405000 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/396534.pem
	I1206 09:28:45.155198  405000 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/396534.pem /etc/ssl/certs/396534.pem
	I1206 09:28:45.166926  405000 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/396534.pem
	I1206 09:28:45.172011  405000 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 09:24 /usr/share/ca-certificates/396534.pem
	I1206 09:28:45.172075  405000 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/396534.pem
	I1206 09:28:45.179097  405000 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 09:28:45.190195  405000 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:28:45.195680  405000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 09:28:45.203086  405000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 09:28:45.210171  405000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 09:28:45.217010  405000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 09:28:45.223948  405000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 09:28:45.230923  405000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
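	# The six openssl calls above are 24-hour expiry probes: with -checkend 86400,
	# openssl exits 0 if the certificate is still valid 86400 seconds from now and
	# non-zero otherwise. The same check can be run by hand inside the guest
	# (sketch, not part of the test run; cert path taken from this log):
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 && echo "ok for at least 24h" || echo "expires within 24h"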
	I1206 09:28:45.238258  405000 kubeadm.go:401] StartCluster: {Name:functional-959292 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-959292 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.122 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:28:45.238386  405000 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 09:28:45.238444  405000 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 09:28:45.272278  405000 cri.go:89] found id: "3cf20b6d798d084267897441ec085d74b908c5ce44f5a0104461078dade3e3db"
	I1206 09:28:45.272295  405000 cri.go:89] found id: "7e0e61506a238944c5ca29e0e8cd96198ed5f63ae148b71a98b24338f8cec799"
	I1206 09:28:45.272300  405000 cri.go:89] found id: "a7a0409ceca2bb30bc27bd580d3e96626e7b2fcec3e9bc911aba8663b88b14ab"
	I1206 09:28:45.272304  405000 cri.go:89] found id: "6e5074c405f22f240aeee9223542f189d50079b24d71ecc6920bdadbd0ba3be6"
	I1206 09:28:45.272307  405000 cri.go:89] found id: "422ce5b897d2b576b825cdca2cb0d613bfe2c99b74fe8984cd5904f6702c11f5"
	I1206 09:28:45.272311  405000 cri.go:89] found id: "f007b54f29b7c249c166e8323973f208279a7e516813e500f58a370519efedc3"
	I1206 09:28:45.272314  405000 cri.go:89] found id: "db52b0948589f2ebba355737cd876e78592bcd1b1561e85c9b037a02e9276902"
	I1206 09:28:45.272317  405000 cri.go:89] found id: ""
	I1206 09:28:45.272395  405000 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
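The log above ends with a crictl query listing the kube-system container IDs; a natural follow-up when triaging a hung cluster is to inspect one of those containers for its state and exit reason. A sketch using the first ID from the listing (not part of the test run):

	minikube -p functional-959292 ssh -- sudo crictl inspect 3cf20b6d798d084267897441ec085d74b908c5ce44f5a0104461078dade3e3db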
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-959292 -n functional-959292
helpers_test.go:269: (dbg) Run:  kubectl --context functional-959292 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-5758569b79-xzgh7 hello-node-connect-9f67c86d4-bbj44 mysql-844cf969f6-f88x4 sp-pod
helpers_test.go:282: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-959292 describe pod busybox-mount hello-node-5758569b79-xzgh7 hello-node-connect-9f67c86d4-bbj44 mysql-844cf969f6-f88x4 sp-pod
helpers_test.go:290: (dbg) kubectl --context functional-959292 describe pod busybox-mount hello-node-5758569b79-xzgh7 hello-node-connect-9f67c86d4-bbj44 mysql-844cf969f6-f88x4 sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  mount-munger:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    Environment:  <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b968x (ro)
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-b968x:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             hello-node-5758569b79-xzgh7
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=hello-node
	                  pod-template-hash=5758569b79
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-node-5758569b79
	Containers:
	  echo-server:
	    Image:        kicbase/echo-server
	    Port:         <none>
	    Host Port:    <none>
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hxsl7 (ro)
	Volumes:
	  kube-api-access-hxsl7:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             hello-node-connect-9f67c86d4-bbj44
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=hello-node-connect
	                  pod-template-hash=9f67c86d4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-node-connect-9f67c86d4
	Containers:
	  echo-server:
	    Image:        kicbase/echo-server
	    Port:         <none>
	    Host Port:    <none>
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k5nkl (ro)
	Volumes:
	  kube-api-access-k5nkl:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             mysql-844cf969f6-f88x4
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=mysql
	                  pod-template-hash=844cf969f6
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/mysql-844cf969f6
	Containers:
	  mysql:
	    Image:      docker.io/mysql:5.7
	    Port:       3306/TCP (mysql)
	    Host Port:  0/TCP (mysql)
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-49bd5 (ro)
	Volumes:
	  kube-api-access-49bd5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  myfrontend:
	    Image:        docker.io/nginx
	    Port:         <none>
	    Host Port:    <none>
	    Environment:  <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6dtzk (ro)
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-6dtzk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (602.85s)
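Every pod described above is Pending with Node: <none> and no recorded events, which points at scheduling or the API server rather than image pulls or the workloads themselves. A hedged triage sketch using the kubectl context from this report (these commands were not part of the test run):

	kubectl --context functional-959292 get pods -n kube-system -o wide
	kubectl --context functional-959292 get events -A --sort-by=.lastTimestamp | tail -n 20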

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (600.62s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-959292 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-959292 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-5758569b79-xzgh7" [b4acadfe-2957-4b58-a0b0-5574306d32c6] Pending
functional_test.go:1460: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-959292 -n functional-959292
functional_test.go:1460: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-12-06 09:43:02.74874182 +0000 UTC m=+1886.157983713
functional_test.go:1460: (dbg) Run:  kubectl --context functional-959292 describe po hello-node-5758569b79-xzgh7 -n default
functional_test.go:1460: (dbg) kubectl --context functional-959292 describe po hello-node-5758569b79-xzgh7 -n default:
Name:             hello-node-5758569b79-xzgh7
Namespace:        default
Priority:         0
Service Account:  default
Node:             <none>
Labels:           app=hello-node
pod-template-hash=5758569b79
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    ReplicaSet/hello-node-5758569b79
Containers:
echo-server:
Image:        kicbase/echo-server
Port:         <none>
Host Port:    <none>
Environment:  <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hxsl7 (ro)
Volumes:
kube-api-access-hxsl7:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>
functional_test.go:1460: (dbg) Run:  kubectl --context functional-959292 logs hello-node-5758569b79-xzgh7 -n default
functional_test.go:1460: (dbg) kubectl --context functional-959292 logs hello-node-5758569b79-xzgh7 -n default:
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (600.62s)
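The hello-node pod likewise never left Pending and produced no events. When a freshly created Deployment behaves this way, the usual next step is to read the Deployment and ReplicaSet conditions for a reason; a sketch against the same context (not part of the original run):

	kubectl --context functional-959292 get deploy,rs,pods -l app=hello-node -o wide
	kubectl --context functional-959292 describe deployment hello-node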

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (242.21s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-959292 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1237853046/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765013952684701104" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1237853046/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765013952684701104" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1237853046/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765013952684701104" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1237853046/001/test-1765013952684701104
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-959292 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (155.331791ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1206 09:39:12.840451  396534 retry.go:31] will retry after 625.515103ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  6 09:39 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  6 09:39 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  6 09:39 test-1765013952684701104
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 ssh cat /mount-9p/test-1765013952684701104
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-959292 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [d7f08e10-7055-4cd5-8c96-be77be465b5a] Pending
E1206 09:39:28.983115  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: WARNING: pod list for "default" "integration-test=busybox-mount" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_mount_test.go:153: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: pod "integration-test=busybox-mount" failed to start within 4m0s: context deadline exceeded ****
functional_test_mount_test.go:153: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-959292 -n functional-959292
functional_test_mount_test.go:153: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: showing logs for failed pods as of 2025-12-06 09:43:14.319488183 +0000 UTC m=+1897.728730091
functional_test_mount_test.go:153: (dbg) Run:  kubectl --context functional-959292 describe po busybox-mount -n default
functional_test_mount_test.go:153: (dbg) kubectl --context functional-959292 describe po busybox-mount -n default:
Name:             busybox-mount
Namespace:        default
Priority:         0
Service Account:  default
Node:             <none>
Labels:           integration-test=busybox-mount
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Containers:
mount-munger:
Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
Port:       <none>
Host Port:  <none>
Command:
/bin/sh
-c
--
Args:
cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
Environment:  <none>
Mounts:
/mount-9p from test-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b968x (ro)
Volumes:
test-volume:
Type:          HostPath (bare host directory volume)
Path:          /mount-9p
HostPathType:  
kube-api-access-b968x:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>
functional_test_mount_test.go:153: (dbg) Run:  kubectl --context functional-959292 logs busybox-mount -n default
functional_test_mount_test.go:153: (dbg) kubectl --context functional-959292 logs busybox-mount -n default:
functional_test_mount_test.go:154: failed waiting for busybox-mount pod: integration-test=busybox-mount within 4m0s: context deadline exceeded
functional_test_mount_test.go:80: "TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-959292 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (171.125117ms)

                                                
                                                
-- stdout --
	192.168.39.1 on /mount-9p type 9p (rw,relatime,dfltuid=1000,dfltgid=1000,access=any,msize=262144,trans=tcp,noextend,port=36427)
	total 2
	-rw-r--r-- 1 docker docker 24 Dec  6 09:39 created-by-test
	-rw-r--r-- 1 docker docker 24 Dec  6 09:39 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Dec  6 09:39 test-1765013952684701104
	cat: /mount-9p/pod-dates: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-amd64 -p functional-959292 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-959292 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1237853046/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-amd64 mount -p functional-959292 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1237853046/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1237853046/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.39.1:36427
* Userspace file server: 
ufs starting
* Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1237853046/001 to /mount-9p

                                                
                                                
* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...

                                                
                                                

                                                
                                                
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-amd64 mount -p functional-959292 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1237853046/001:/mount-9p --alsologtostderr -v=1] stderr:
I1206 09:39:12.743745  408214 out.go:360] Setting OutFile to fd 1 ...
I1206 09:39:12.743911  408214 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:39:12.743924  408214 out.go:374] Setting ErrFile to fd 2...
I1206 09:39:12.743930  408214 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:39:12.744216  408214 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-392561/.minikube/bin
I1206 09:39:12.744537  408214 mustload.go:66] Loading cluster: functional-959292
I1206 09:39:12.745082  408214 config.go:182] Loaded profile config "functional-959292": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 09:39:12.747494  408214 host.go:66] Checking if "functional-959292" exists ...
I1206 09:39:12.750960  408214 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
I1206 09:39:12.751393  408214 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
I1206 09:39:12.751436  408214 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
I1206 09:39:12.756320  408214 out.go:179] * Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1237853046/001 into VM as /mount-9p ...
I1206 09:39:12.757606  408214 out.go:179]   - Mount type:   9p
I1206 09:39:12.758865  408214 out.go:179]   - User ID:      docker
I1206 09:39:12.760101  408214 out.go:179]   - Group ID:     docker
I1206 09:39:12.761755  408214 out.go:179]   - Version:      9p2000.L
I1206 09:39:12.765219  408214 out.go:179]   - Message Size: 262144
I1206 09:39:12.766463  408214 out.go:179]   - Options:      map[]
I1206 09:39:12.767695  408214 out.go:179]   - Bind Address: 192.168.39.1:36427
I1206 09:39:12.769194  408214 out.go:179] * Userspace file server: 
I1206 09:39:12.769388  408214 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1206 09:39:12.772933  408214 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
I1206 09:39:12.773424  408214 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
I1206 09:39:12.773453  408214 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
I1206 09:39:12.773610  408214 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/functional-959292/id_rsa Username:docker}
I1206 09:39:12.856677  408214 mount.go:180] unmount for /mount-9p ran successfully
I1206 09:39:12.856720  408214 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1206 09:39:12.870770  408214 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=36427,trans=tcp,version=9p2000.L 192.168.39.1 /mount-9p"
I1206 09:39:12.904136  408214 main.go:127] stdlog: ufs.go:141 connected
I1206 09:39:12.904357  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tversion tag 65535 msize 262144 version '9P2000.L'
I1206 09:39:12.904436  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rversion tag 65535 msize 262144 version '9P2000'
I1206 09:39:12.905173  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I1206 09:39:12.905253  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rattach tag 0 aqid (20fa309 f30798ab 'd')
I1206 09:39:12.906126  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tstat tag 0 fid 0
I1206 09:39:12.906284  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa309 f30798ab 'd') m d775 at 0 mt 1765013952 l 4096 t 0 d 0 ext )
I1206 09:39:12.906594  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tstat tag 0 fid 0
I1206 09:39:12.906693  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa309 f30798ab 'd') m d775 at 0 mt 1765013952 l 4096 t 0 d 0 ext )
I1206 09:39:12.909446  408214 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/.mount-process: {Name:mk423c886b5a42e7eb084886d1f1dd8d69b1d394 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1206 09:39:12.909684  408214 mount.go:105] mount successful: ""
I1206 09:39:12.911970  408214 out.go:179] * Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1237853046/001 to /mount-9p
I1206 09:39:12.913443  408214 out.go:203] 
I1206 09:39:12.914980  408214 out.go:179] * NOTE: This process must stay alive for the mount to be accessible ...
I1206 09:39:13.779449  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tstat tag 0 fid 0
I1206 09:39:13.779608  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa309 f30798ab 'd') m d775 at 0 mt 1765013952 l 4096 t 0 d 0 ext )
I1206 09:39:13.781316  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Twalk tag 0 fid 0 newfid 1 
I1206 09:39:13.781371  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rwalk tag 0 
I1206 09:39:13.781667  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Topen tag 0 fid 1 mode 0
I1206 09:39:13.781794  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Ropen tag 0 qid (20fa309 f30798ab 'd') iounit 0
I1206 09:39:13.782053  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tstat tag 0 fid 0
I1206 09:39:13.782161  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa309 f30798ab 'd') m d775 at 0 mt 1765013952 l 4096 t 0 d 0 ext )
I1206 09:39:13.782426  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tread tag 0 fid 1 offset 0 count 262120
I1206 09:39:13.782645  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rread tag 0 count 258
I1206 09:39:13.782906  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tread tag 0 fid 1 offset 258 count 261862
I1206 09:39:13.782939  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rread tag 0 count 0
I1206 09:39:13.783244  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tread tag 0 fid 1 offset 258 count 262120
I1206 09:39:13.783293  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rread tag 0 count 0
I1206 09:39:13.783583  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1206 09:39:13.783640  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rwalk tag 0 (20fa30c f30798ab '') 
I1206 09:39:13.783942  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tstat tag 0 fid 2
I1206 09:39:13.784067  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa30c f30798ab '') m 644 at 0 mt 1765013952 l 24 t 0 d 0 ext )
I1206 09:39:13.784318  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tstat tag 0 fid 2
I1206 09:39:13.784434  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa30c f30798ab '') m 644 at 0 mt 1765013952 l 24 t 0 d 0 ext )
I1206 09:39:13.784686  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tclunk tag 0 fid 2
I1206 09:39:13.784756  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rclunk tag 0
I1206 09:39:13.785007  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1206 09:39:13.785046  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rwalk tag 0 (20fa30c f30798ab '') 
I1206 09:39:13.785309  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tstat tag 0 fid 2
I1206 09:39:13.785423  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa30c f30798ab '') m 644 at 0 mt 1765013952 l 24 t 0 d 0 ext )
I1206 09:39:13.785656  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tclunk tag 0 fid 2
I1206 09:39:13.785687  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rclunk tag 0
I1206 09:39:13.785963  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1206 09:39:13.786034  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rwalk tag 0 (20fa30b f30798ab '') 
I1206 09:39:13.786275  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tstat tag 0 fid 2
I1206 09:39:13.786380  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa30b f30798ab '') m 644 at 0 mt 1765013952 l 24 t 0 d 0 ext )
I1206 09:39:13.786600  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tstat tag 0 fid 2
I1206 09:39:13.786693  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa30b f30798ab '') m 644 at 0 mt 1765013952 l 24 t 0 d 0 ext )
I1206 09:39:13.786998  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tclunk tag 0 fid 2
I1206 09:39:13.787034  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rclunk tag 0
I1206 09:39:13.787233  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1206 09:39:13.787280  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rwalk tag 0 (20fa30b f30798ab '') 
I1206 09:39:13.787531  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tstat tag 0 fid 2
I1206 09:39:13.787621  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa30b f30798ab '') m 644 at 0 mt 1765013952 l 24 t 0 d 0 ext )
I1206 09:39:13.787944  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tclunk tag 0 fid 2
I1206 09:39:13.787986  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rclunk tag 0
I1206 09:39:13.788322  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Twalk tag 0 fid 0 newfid 2 0:'test-1765013952684701104' 
I1206 09:39:13.788372  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rwalk tag 0 (20fa30d f30798ab '') 
I1206 09:39:13.788670  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tstat tag 0 fid 2
I1206 09:39:13.788782  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rstat tag 0 st ('test-1765013952684701104' 'jenkins' 'balintp' '' q (20fa30d f30798ab '') m 644 at 0 mt 1765013952 l 24 t 0 d 0 ext )
I1206 09:39:13.789011  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tstat tag 0 fid 2
I1206 09:39:13.789109  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rstat tag 0 st ('test-1765013952684701104' 'jenkins' 'balintp' '' q (20fa30d f30798ab '') m 644 at 0 mt 1765013952 l 24 t 0 d 0 ext )
I1206 09:39:13.789347  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tclunk tag 0 fid 2
I1206 09:39:13.789374  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rclunk tag 0
I1206 09:39:13.789592  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Twalk tag 0 fid 0 newfid 2 0:'test-1765013952684701104' 
I1206 09:39:13.789633  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rwalk tag 0 (20fa30d f30798ab '') 
I1206 09:39:13.789861  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tstat tag 0 fid 2
I1206 09:39:13.789947  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rstat tag 0 st ('test-1765013952684701104' 'jenkins' 'balintp' '' q (20fa30d f30798ab '') m 644 at 0 mt 1765013952 l 24 t 0 d 0 ext )
I1206 09:39:13.790252  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tclunk tag 0 fid 2
I1206 09:39:13.790283  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rclunk tag 0
I1206 09:39:13.790500  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tread tag 0 fid 1 offset 258 count 262120
I1206 09:39:13.790543  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rread tag 0 count 0
I1206 09:39:13.790756  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tclunk tag 0 fid 1
I1206 09:39:13.790797  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rclunk tag 0
I1206 09:39:13.952044  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Twalk tag 0 fid 0 newfid 1 0:'test-1765013952684701104' 
I1206 09:39:13.952148  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rwalk tag 0 (20fa30d f30798ab '') 
I1206 09:39:13.952381  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tstat tag 0 fid 1
I1206 09:39:13.952509  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rstat tag 0 st ('test-1765013952684701104' 'jenkins' 'balintp' '' q (20fa30d f30798ab '') m 644 at 0 mt 1765013952 l 24 t 0 d 0 ext )
I1206 09:39:13.952787  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Twalk tag 0 fid 1 newfid 2 
I1206 09:39:13.952835  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rwalk tag 0 
I1206 09:39:13.953054  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Topen tag 0 fid 2 mode 0
I1206 09:39:13.953114  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Ropen tag 0 qid (20fa30d f30798ab '') iounit 0
I1206 09:39:13.953313  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tstat tag 0 fid 1
I1206 09:39:13.953422  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rstat tag 0 st ('test-1765013952684701104' 'jenkins' 'balintp' '' q (20fa30d f30798ab '') m 644 at 0 mt 1765013952 l 24 t 0 d 0 ext )
I1206 09:39:13.953647  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tread tag 0 fid 2 offset 0 count 262120
I1206 09:39:13.953699  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rread tag 0 count 24
I1206 09:39:13.953870  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tread tag 0 fid 2 offset 24 count 262120
I1206 09:39:13.953906  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rread tag 0 count 0
I1206 09:39:13.954285  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tread tag 0 fid 2 offset 24 count 262120
I1206 09:39:13.954318  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rread tag 0 count 0
I1206 09:39:13.954535  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tclunk tag 0 fid 2
I1206 09:39:13.954583  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rclunk tag 0
I1206 09:39:13.954756  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tclunk tag 0 fid 1
I1206 09:39:13.954786  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rclunk tag 0
I1206 09:43:14.601832  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tstat tag 0 fid 0
I1206 09:43:14.602033  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa309 f30798ab 'd') m d775 at 0 mt 1765013952 l 4096 t 0 d 0 ext )
I1206 09:43:14.604103  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Twalk tag 0 fid 0 newfid 1 
I1206 09:43:14.604172  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rwalk tag 0 
I1206 09:43:14.604626  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Topen tag 0 fid 1 mode 0
I1206 09:43:14.604699  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Ropen tag 0 qid (20fa309 f30798ab 'd') iounit 0
I1206 09:43:14.605125  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tstat tag 0 fid 0
I1206 09:43:14.605279  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa309 f30798ab 'd') m d775 at 0 mt 1765013952 l 4096 t 0 d 0 ext )
I1206 09:43:14.605749  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tread tag 0 fid 1 offset 0 count 262120
I1206 09:43:14.605959  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rread tag 0 count 258
I1206 09:43:14.606271  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tread tag 1 fid 1 offset 258 count 261862
I1206 09:43:14.606320  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rread tag 1 count 0
I1206 09:43:14.606599  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tread tag 0 fid 1 offset 258 count 262120
I1206 09:43:14.606637  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rread tag 0 count 0
I1206 09:43:14.606908  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1206 09:43:14.606963  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rwalk tag 0 (20fa30c f30798ab '') 
I1206 09:43:14.607188  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tstat tag 0 fid 2
I1206 09:43:14.607285  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa30c f30798ab '') m 644 at 0 mt 1765013952 l 24 t 0 d 0 ext )
I1206 09:43:14.607565  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tstat tag 0 fid 2
I1206 09:43:14.607679  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa30c f30798ab '') m 644 at 0 mt 1765013952 l 24 t 0 d 0 ext )
I1206 09:43:14.607950  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tclunk tag 0 fid 2
I1206 09:43:14.607979  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rclunk tag 0
I1206 09:43:14.608195  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1206 09:43:14.608238  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rwalk tag 0 (20fa30c f30798ab '') 
I1206 09:43:14.608534  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tstat tag 0 fid 2
I1206 09:43:14.608630  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa30c f30798ab '') m 644 at 0 mt 1765013952 l 24 t 0 d 0 ext )
I1206 09:43:14.608916  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tclunk tag 0 fid 2
I1206 09:43:14.608947  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rclunk tag 0
I1206 09:43:14.609241  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1206 09:43:14.609280  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rwalk tag 0 (20fa30b f30798ab '') 
I1206 09:43:14.609460  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tstat tag 0 fid 2
I1206 09:43:14.609549  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa30b f30798ab '') m 644 at 0 mt 1765013952 l 24 t 0 d 0 ext )
I1206 09:43:14.609842  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tstat tag 0 fid 2
I1206 09:43:14.609922  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa30b f30798ab '') m 644 at 0 mt 1765013952 l 24 t 0 d 0 ext )
I1206 09:43:14.610186  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tclunk tag 0 fid 2
I1206 09:43:14.610216  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rclunk tag 0
I1206 09:43:14.610511  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1206 09:43:14.610554  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rwalk tag 0 (20fa30b f30798ab '') 
I1206 09:43:14.610806  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tstat tag 0 fid 2
I1206 09:43:14.610911  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa30b f30798ab '') m 644 at 0 mt 1765013952 l 24 t 0 d 0 ext )
I1206 09:43:14.611159  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tclunk tag 0 fid 2
I1206 09:43:14.611184  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rclunk tag 0
I1206 09:43:14.611409  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Twalk tag 0 fid 0 newfid 2 0:'test-1765013952684701104' 
I1206 09:43:14.611449  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rwalk tag 0 (20fa30d f30798ab '') 
I1206 09:43:14.611677  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tstat tag 0 fid 2
I1206 09:43:14.611762  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rstat tag 0 st ('test-1765013952684701104' 'jenkins' 'balintp' '' q (20fa30d f30798ab '') m 644 at 0 mt 1765013952 l 24 t 0 d 0 ext )
I1206 09:43:14.612003  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tstat tag 0 fid 2
I1206 09:43:14.612082  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rstat tag 0 st ('test-1765013952684701104' 'jenkins' 'balintp' '' q (20fa30d f30798ab '') m 644 at 0 mt 1765013952 l 24 t 0 d 0 ext )
I1206 09:43:14.612403  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tclunk tag 0 fid 2
I1206 09:43:14.612434  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rclunk tag 0
I1206 09:43:14.612744  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Twalk tag 0 fid 0 newfid 2 0:'test-1765013952684701104' 
I1206 09:43:14.612787  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rwalk tag 0 (20fa30d f30798ab '') 
I1206 09:43:14.613072  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tstat tag 0 fid 2
I1206 09:43:14.613260  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rstat tag 0 st ('test-1765013952684701104' 'jenkins' 'balintp' '' q (20fa30d f30798ab '') m 644 at 0 mt 1765013952 l 24 t 0 d 0 ext )
I1206 09:43:14.613627  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tclunk tag 0 fid 2
I1206 09:43:14.613658  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rclunk tag 0
I1206 09:43:14.613938  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tread tag 0 fid 1 offset 258 count 262120
I1206 09:43:14.613975  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rread tag 0 count 0
I1206 09:43:14.614263  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tclunk tag 0 fid 1
I1206 09:43:14.614305  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rclunk tag 0
I1206 09:43:14.617151  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I1206 09:43:14.617211  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rerror tag 0 ename 'file not found' ecode 0
I1206 09:43:14.778356  408214 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.122:53614 Tclunk tag 0 fid 0
I1206 09:43:14.778414  408214 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.122:53614 Rclunk tag 0
I1206 09:43:14.778966  408214 main.go:127] stdlog: ufs.go:147 disconnected
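The 9P exchange above walks and stats marker files ('created-by-test', 'test-1765013952684701104') that the mount test places in the host directory before exporting it over 9p; the walk to 'pod-dates' returns 'file not found', apparently because nothing has written that path yet. A minimal Go sketch of creating such marker files (the temp-dir layout, file names, and contents are assumptions for illustration, not the test's actual helpers):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"time"
	)

	func main() {
		// Host-side directory to be exported over 9p (illustrative only).
		dir, err := os.MkdirTemp("", "mount-9p-")
		if err != nil {
			panic(err)
		}
		// Marker files similar to the ones the 9P server is asked to stat above.
		names := []string{
			"created-by-test",
			fmt.Sprintf("test-%d", time.Now().UnixNano()),
		}
		for _, name := range names {
			// 0644 matches the mode reported in the Rstat replies (m 644).
			if err := os.WriteFile(filepath.Join(dir, name), []byte("test contents\n"), 0o644); err != nil {
				panic(err)
			}
		}
		fmt.Println("marker files written to", dir)
	}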
I1206 09:43:14.797381  408214 out.go:179] * Unmounting /mount-9p ...
I1206 09:43:14.798992  408214 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1206 09:43:14.808773  408214 mount.go:180] unmount for /mount-9p ran successfully
I1206 09:43:14.808923  408214 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/.mount-process: {Name:mk423c886b5a42e7eb084886d1f1dd8d69b1d394 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1206 09:43:14.810910  408214 out.go:203] 
W1206 09:43:14.812423  408214 out.go:285] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I1206 09:43:14.813642  408214 out.go:203] 
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (242.21s)
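Before the interrupt, the log shows the unmount guard minikube runs in the guest: umount is only called when findmnt reports something mounted at /mount-9p. A local Go sketch of the same guard, assuming a plain exec.Command on the host instead of the ssh_runner used above:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// unmountIfMounted mirrors the shell guard seen in the log: only unmount
	// the path if findmnt reports a mount there, using a forced lazy unmount.
	// This runs locally for illustration; minikube executes the equivalent
	// command inside the guest over SSH.
	func unmountIfMounted(path string) error {
		cmd := fmt.Sprintf(`[ "x$(findmnt -T %[1]s | grep %[1]s)" != "x" ] && sudo umount -f -l %[1]s || echo `, path)
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			return fmt.Errorf("unmount %s: %v: %s", path, err, out)
		}
		return nil
	}

	func main() {
		if err := unmountIfMounted("/mount-9p"); err != nil {
			fmt.Println(err)
		}
	}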

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.31s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-959292 service --namespace=default --https --url hello-node: exit status 115 (312.680712ms)

                                                
                                                
-- stdout --
	https://192.168.39.122:31599
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-959292 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.31s)
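All three ServiceCmd failures in this run exit with SVC_UNREACHABLE because the hello-node service has no running backend pod, even though a NodePort URL can still be constructed. A client-go sketch of checking for ready endpoints before asking for the URL (the kubeconfig path and the check itself are illustrative assumptions, not minikube's implementation):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// hasReadyEndpoints reports whether a service has at least one ready
	// backend address, which is the condition the "no running pod for
	// service hello-node found" errors above are really about.
	func hasReadyEndpoints(clientset kubernetes.Interface, namespace, service string) (bool, error) {
		ep, err := clientset.CoreV1().Endpoints(namespace).Get(context.TODO(), service, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, subset := range ep.Subsets {
			if len(subset.Addresses) > 0 {
				return true, nil
			}
		}
		return false, nil
	}

	func main() {
		// Kubeconfig location is an assumption for illustration.
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		ready, err := hasReadyEndpoints(clientset, "default", "hello-node")
		fmt.Println("ready:", ready, "err:", err)
	}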

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.3s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-959292 service hello-node --url --format={{.IP}}: exit status 115 (303.558595ms)

                                                
                                                
-- stdout --
	192.168.39.122
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-959292 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.30s)
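The --format flag above is rendered as a Go template against the discovered service URL, which is why {{.IP}} still prints 192.168.39.122 even though the command exits 115. A small text/template sketch; the struct fields here are assumptions chosen so that {{.IP}} renders, not minikube's actual type:

	package main

	import (
		"os"
		"text/template"
	)

	// serviceURL stands in for whatever minikube renders the --format
	// template against; only the field names are assumed here.
	type serviceURL struct {
		IP   string
		Port int
	}

	func main() {
		tmpl := template.Must(template.New("svc").Parse("{{.IP}}"))
		// Values taken from the stdout captured above.
		_ = tmpl.Execute(os.Stdout, serviceURL{IP: "192.168.39.122", Port: 31599})
	}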

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.3s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-959292 service hello-node --url: exit status 115 (304.502989ms)

                                                
                                                
-- stdout --
	http://192.168.39.122:31599
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-959292 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.122:31599
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.30s)

                                                
                                    
TestPreload (146.62s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-996504 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio
E1206 10:28:02.365636  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-996504 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio: (1m32.538499979s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-996504 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 -p test-preload-996504 image pull gcr.io/k8s-minikube/busybox: (3.534474795s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-996504
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-996504: (6.775103474s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-996504 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-996504 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (41.060540447s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-996504 image list
E1206 10:29:04.546665  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-310626/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:73: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20250512-df8de77b

                                                
                                                
-- /stdout --
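TestPreload pulls gcr.io/k8s-minikube/busybox before the stop/start cycle and then expects it to survive into the image list; the list above only contains the preloaded base images, so the test fails. An illustrative re-creation of that final check (binary path and profile name are taken from the log; this is not the actual preload_test.go code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Run "image list" for the profile and look for the pulled image.
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "test-preload-996504", "image", "list").CombinedOutput()
		if err != nil {
			fmt.Println("image list failed:", err)
			return
		}
		if strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
			fmt.Println("pulled image survived the restart")
		} else {
			fmt.Println("busybox missing from image list:\n" + string(out))
		}
	}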
panic.go:615: *** TestPreload FAILED at 2025-12-06 10:29:04.641583624 +0000 UTC m=+4648.050825510
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-996504 -n test-preload-996504
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-996504 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-996504 logs -n 25: (1.0389949s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-777422 ssh -n multinode-777422-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-777422     │ jenkins │ v1.37.0 │ 06 Dec 25 10:15 UTC │ 06 Dec 25 10:15 UTC │
	│ ssh     │ multinode-777422 ssh -n multinode-777422 sudo cat /home/docker/cp-test_multinode-777422-m03_multinode-777422.txt                                          │ multinode-777422     │ jenkins │ v1.37.0 │ 06 Dec 25 10:15 UTC │ 06 Dec 25 10:15 UTC │
	│ cp      │ multinode-777422 cp multinode-777422-m03:/home/docker/cp-test.txt multinode-777422-m02:/home/docker/cp-test_multinode-777422-m03_multinode-777422-m02.txt │ multinode-777422     │ jenkins │ v1.37.0 │ 06 Dec 25 10:15 UTC │ 06 Dec 25 10:15 UTC │
	│ ssh     │ multinode-777422 ssh -n multinode-777422-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-777422     │ jenkins │ v1.37.0 │ 06 Dec 25 10:15 UTC │ 06 Dec 25 10:15 UTC │
	│ ssh     │ multinode-777422 ssh -n multinode-777422-m02 sudo cat /home/docker/cp-test_multinode-777422-m03_multinode-777422-m02.txt                                  │ multinode-777422     │ jenkins │ v1.37.0 │ 06 Dec 25 10:15 UTC │ 06 Dec 25 10:15 UTC │
	│ node    │ multinode-777422 node stop m03                                                                                                                            │ multinode-777422     │ jenkins │ v1.37.0 │ 06 Dec 25 10:15 UTC │ 06 Dec 25 10:15 UTC │
	│ node    │ multinode-777422 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-777422     │ jenkins │ v1.37.0 │ 06 Dec 25 10:15 UTC │ 06 Dec 25 10:16 UTC │
	│ node    │ list -p multinode-777422                                                                                                                                  │ multinode-777422     │ jenkins │ v1.37.0 │ 06 Dec 25 10:16 UTC │                     │
	│ stop    │ -p multinode-777422                                                                                                                                       │ multinode-777422     │ jenkins │ v1.37.0 │ 06 Dec 25 10:16 UTC │ 06 Dec 25 10:19 UTC │
	│ start   │ -p multinode-777422 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-777422     │ jenkins │ v1.37.0 │ 06 Dec 25 10:19 UTC │ 06 Dec 25 10:21 UTC │
	│ node    │ list -p multinode-777422                                                                                                                                  │ multinode-777422     │ jenkins │ v1.37.0 │ 06 Dec 25 10:21 UTC │                     │
	│ node    │ multinode-777422 node delete m03                                                                                                                          │ multinode-777422     │ jenkins │ v1.37.0 │ 06 Dec 25 10:21 UTC │ 06 Dec 25 10:21 UTC │
	│ stop    │ multinode-777422 stop                                                                                                                                     │ multinode-777422     │ jenkins │ v1.37.0 │ 06 Dec 25 10:21 UTC │ 06 Dec 25 10:24 UTC │
	│ start   │ -p multinode-777422 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio                                                            │ multinode-777422     │ jenkins │ v1.37.0 │ 06 Dec 25 10:24 UTC │ 06 Dec 25 10:26 UTC │
	│ node    │ list -p multinode-777422                                                                                                                                  │ multinode-777422     │ jenkins │ v1.37.0 │ 06 Dec 25 10:26 UTC │                     │
	│ start   │ -p multinode-777422-m02 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-777422-m02 │ jenkins │ v1.37.0 │ 06 Dec 25 10:26 UTC │                     │
	│ start   │ -p multinode-777422-m03 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-777422-m03 │ jenkins │ v1.37.0 │ 06 Dec 25 10:26 UTC │ 06 Dec 25 10:26 UTC │
	│ node    │ add -p multinode-777422                                                                                                                                   │ multinode-777422     │ jenkins │ v1.37.0 │ 06 Dec 25 10:26 UTC │                     │
	│ delete  │ -p multinode-777422-m03                                                                                                                                   │ multinode-777422-m03 │ jenkins │ v1.37.0 │ 06 Dec 25 10:26 UTC │ 06 Dec 25 10:26 UTC │
	│ delete  │ -p multinode-777422                                                                                                                                       │ multinode-777422     │ jenkins │ v1.37.0 │ 06 Dec 25 10:26 UTC │ 06 Dec 25 10:26 UTC │
	│ start   │ -p test-preload-996504 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio                                │ test-preload-996504  │ jenkins │ v1.37.0 │ 06 Dec 25 10:26 UTC │ 06 Dec 25 10:28 UTC │
	│ image   │ test-preload-996504 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-996504  │ jenkins │ v1.37.0 │ 06 Dec 25 10:28 UTC │ 06 Dec 25 10:28 UTC │
	│ stop    │ -p test-preload-996504                                                                                                                                    │ test-preload-996504  │ jenkins │ v1.37.0 │ 06 Dec 25 10:28 UTC │ 06 Dec 25 10:28 UTC │
	│ start   │ -p test-preload-996504 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                          │ test-preload-996504  │ jenkins │ v1.37.0 │ 06 Dec 25 10:28 UTC │ 06 Dec 25 10:29 UTC │
	│ image   │ test-preload-996504 image list                                                                                                                            │ test-preload-996504  │ jenkins │ v1.37.0 │ 06 Dec 25 10:29 UTC │ 06 Dec 25 10:29 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:28:23
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 10:28:23.435955  427470 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:28:23.436117  427470 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:28:23.436123  427470 out.go:374] Setting ErrFile to fd 2...
	I1206 10:28:23.436130  427470 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:28:23.436619  427470 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-392561/.minikube/bin
	I1206 10:28:23.437137  427470 out.go:368] Setting JSON to false
	I1206 10:28:23.438052  427470 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7843,"bootTime":1765009060,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 10:28:23.438126  427470 start.go:143] virtualization: kvm guest
	I1206 10:28:23.440213  427470 out.go:179] * [test-preload-996504] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 10:28:23.441746  427470 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 10:28:23.441774  427470 notify.go:221] Checking for updates...
	I1206 10:28:23.444556  427470 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:28:23.445952  427470 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-392561/kubeconfig
	I1206 10:28:23.447376  427470 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-392561/.minikube
	I1206 10:28:23.449028  427470 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 10:28:23.450661  427470 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:28:23.452693  427470 config.go:182] Loaded profile config "test-preload-996504": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:28:23.453228  427470 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:28:23.490848  427470 out.go:179] * Using the kvm2 driver based on existing profile
	I1206 10:28:23.492490  427470 start.go:309] selected driver: kvm2
	I1206 10:28:23.492528  427470 start.go:927] validating driver "kvm2" against &{Name:test-preload-996504 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:test-preload-996504 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:28:23.492633  427470 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:28:23.493747  427470 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 10:28:23.493785  427470 cni.go:84] Creating CNI manager for ""
	I1206 10:28:23.493846  427470 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 10:28:23.493903  427470 start.go:353] cluster config:
	{Name:test-preload-996504 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:test-preload-996504 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:28:23.493994  427470 iso.go:125] acquiring lock: {Name:mkf36bf2c9901302dc74c7ac02d02007e6a978f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:28:23.495699  427470 out.go:179] * Starting "test-preload-996504" primary control-plane node in "test-preload-996504" cluster
	I1206 10:28:23.496931  427470 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 10:28:23.496967  427470 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-392561/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1206 10:28:23.496982  427470 cache.go:65] Caching tarball of preloaded images
	I1206 10:28:23.497089  427470 preload.go:238] Found /home/jenkins/minikube-integration/22047-392561/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 10:28:23.497103  427470 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1206 10:28:23.497199  427470 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/test-preload-996504/config.json ...
	I1206 10:28:23.497402  427470 start.go:360] acquireMachinesLock for test-preload-996504: {Name:mk0e8456872a81874c47f1b4b5997728e70c766d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1206 10:28:23.497449  427470 start.go:364] duration metric: took 26.241µs to acquireMachinesLock for "test-preload-996504"
	I1206 10:28:23.497499  427470 start.go:96] Skipping create...Using existing machine configuration
	I1206 10:28:23.497508  427470 fix.go:54] fixHost starting: 
	I1206 10:28:23.499361  427470 fix.go:112] recreateIfNeeded on test-preload-996504: state=Stopped err=<nil>
	W1206 10:28:23.499383  427470 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 10:28:23.501305  427470 out.go:252] * Restarting existing kvm2 VM for "test-preload-996504" ...
	I1206 10:28:23.501396  427470 main.go:143] libmachine: starting domain...
	I1206 10:28:23.501412  427470 main.go:143] libmachine: ensuring networks are active...
	I1206 10:28:23.502258  427470 main.go:143] libmachine: Ensuring network default is active
	I1206 10:28:23.502663  427470 main.go:143] libmachine: Ensuring network mk-test-preload-996504 is active
	I1206 10:28:23.503130  427470 main.go:143] libmachine: getting domain XML...
	I1206 10:28:23.504337  427470 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>test-preload-996504</name>
	  <uuid>fa851289-ef6c-4a80-96ed-3fe396c166ac</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22047-392561/.minikube/machines/test-preload-996504/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22047-392561/.minikube/machines/test-preload-996504/test-preload-996504.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:e6:e1:94'/>
	      <source network='mk-test-preload-996504'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:56:37:6d'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1206 10:28:24.805947  427470 main.go:143] libmachine: waiting for domain to start...
	I1206 10:28:24.807282  427470 main.go:143] libmachine: domain is now running
	I1206 10:28:24.807300  427470 main.go:143] libmachine: waiting for IP...
	I1206 10:28:24.808127  427470 main.go:143] libmachine: domain test-preload-996504 has defined MAC address 52:54:00:e6:e1:94 in network mk-test-preload-996504
	I1206 10:28:24.808689  427470 main.go:143] libmachine: domain test-preload-996504 has current primary IP address 192.168.39.41 and MAC address 52:54:00:e6:e1:94 in network mk-test-preload-996504
	I1206 10:28:24.808707  427470 main.go:143] libmachine: found domain IP: 192.168.39.41
	I1206 10:28:24.808733  427470 main.go:143] libmachine: reserving static IP address...
	I1206 10:28:24.809191  427470 main.go:143] libmachine: found host DHCP lease matching {name: "test-preload-996504", mac: "52:54:00:e6:e1:94", ip: "192.168.39.41"} in network mk-test-preload-996504: {Iface:virbr1 ExpiryTime:2025-12-06 11:26:55 +0000 UTC Type:0 Mac:52:54:00:e6:e1:94 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:test-preload-996504 Clientid:01:52:54:00:e6:e1:94}
	I1206 10:28:24.809218  427470 main.go:143] libmachine: skip adding static IP to network mk-test-preload-996504 - found existing host DHCP lease matching {name: "test-preload-996504", mac: "52:54:00:e6:e1:94", ip: "192.168.39.41"}
	I1206 10:28:24.809227  427470 main.go:143] libmachine: reserved static IP address 192.168.39.41 for domain test-preload-996504
	I1206 10:28:24.809232  427470 main.go:143] libmachine: waiting for SSH...
	I1206 10:28:24.809238  427470 main.go:143] libmachine: Getting to WaitForSSH function...
	I1206 10:28:24.811618  427470 main.go:143] libmachine: domain test-preload-996504 has defined MAC address 52:54:00:e6:e1:94 in network mk-test-preload-996504
	I1206 10:28:24.812067  427470 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e6:e1:94", ip: ""} in network mk-test-preload-996504: {Iface:virbr1 ExpiryTime:2025-12-06 11:26:55 +0000 UTC Type:0 Mac:52:54:00:e6:e1:94 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:test-preload-996504 Clientid:01:52:54:00:e6:e1:94}
	I1206 10:28:24.812095  427470 main.go:143] libmachine: domain test-preload-996504 has defined IP address 192.168.39.41 and MAC address 52:54:00:e6:e1:94 in network mk-test-preload-996504
	I1206 10:28:24.812283  427470 main.go:143] libmachine: Using SSH client type: native
	I1206 10:28:24.812513  427470 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I1206 10:28:24.812522  427470 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1206 10:28:27.914029  427470 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.41:22: connect: no route to host
	I1206 10:28:33.994025  427470 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.41:22: connect: no route to host
	I1206 10:28:37.116345  427470 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 10:28:37.121240  427470 main.go:143] libmachine: domain test-preload-996504 has defined MAC address 52:54:00:e6:e1:94 in network mk-test-preload-996504
	I1206 10:28:37.121887  427470 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e6:e1:94", ip: ""} in network mk-test-preload-996504: {Iface:virbr1 ExpiryTime:2025-12-06 11:28:35 +0000 UTC Type:0 Mac:52:54:00:e6:e1:94 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:test-preload-996504 Clientid:01:52:54:00:e6:e1:94}
	I1206 10:28:37.121929  427470 main.go:143] libmachine: domain test-preload-996504 has defined IP address 192.168.39.41 and MAC address 52:54:00:e6:e1:94 in network mk-test-preload-996504
	I1206 10:28:37.122281  427470 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/test-preload-996504/config.json ...
	I1206 10:28:37.122519  427470 machine.go:94] provisionDockerMachine start ...
	I1206 10:28:37.125400  427470 main.go:143] libmachine: domain test-preload-996504 has defined MAC address 52:54:00:e6:e1:94 in network mk-test-preload-996504
	I1206 10:28:37.125910  427470 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e6:e1:94", ip: ""} in network mk-test-preload-996504: {Iface:virbr1 ExpiryTime:2025-12-06 11:28:35 +0000 UTC Type:0 Mac:52:54:00:e6:e1:94 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:test-preload-996504 Clientid:01:52:54:00:e6:e1:94}
	I1206 10:28:37.125951  427470 main.go:143] libmachine: domain test-preload-996504 has defined IP address 192.168.39.41 and MAC address 52:54:00:e6:e1:94 in network mk-test-preload-996504
	I1206 10:28:37.126133  427470 main.go:143] libmachine: Using SSH client type: native
	I1206 10:28:37.126358  427470 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I1206 10:28:37.126368  427470 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 10:28:37.255584  427470 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1206 10:28:37.255617  427470 buildroot.go:166] provisioning hostname "test-preload-996504"
	I1206 10:28:37.259117  427470 main.go:143] libmachine: domain test-preload-996504 has defined MAC address 52:54:00:e6:e1:94 in network mk-test-preload-996504
	I1206 10:28:37.259578  427470 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e6:e1:94", ip: ""} in network mk-test-preload-996504: {Iface:virbr1 ExpiryTime:2025-12-06 11:28:35 +0000 UTC Type:0 Mac:52:54:00:e6:e1:94 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:test-preload-996504 Clientid:01:52:54:00:e6:e1:94}
	I1206 10:28:37.259603  427470 main.go:143] libmachine: domain test-preload-996504 has defined IP address 192.168.39.41 and MAC address 52:54:00:e6:e1:94 in network mk-test-preload-996504
	I1206 10:28:37.259817  427470 main.go:143] libmachine: Using SSH client type: native
	I1206 10:28:37.260029  427470 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I1206 10:28:37.260042  427470 main.go:143] libmachine: About to run SSH command:
	sudo hostname test-preload-996504 && echo "test-preload-996504" | sudo tee /etc/hostname
	I1206 10:28:37.401685  427470 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-996504
	
	I1206 10:28:37.404940  427470 main.go:143] libmachine: domain test-preload-996504 has defined MAC address 52:54:00:e6:e1:94 in network mk-test-preload-996504
	I1206 10:28:37.405320  427470 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e6:e1:94", ip: ""} in network mk-test-preload-996504: {Iface:virbr1 ExpiryTime:2025-12-06 11:28:35 +0000 UTC Type:0 Mac:52:54:00:e6:e1:94 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:test-preload-996504 Clientid:01:52:54:00:e6:e1:94}
	I1206 10:28:37.405346  427470 main.go:143] libmachine: domain test-preload-996504 has defined IP address 192.168.39.41 and MAC address 52:54:00:e6:e1:94 in network mk-test-preload-996504
	I1206 10:28:37.405497  427470 main.go:143] libmachine: Using SSH client type: native
	I1206 10:28:37.405778  427470 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I1206 10:28:37.405797  427470 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-996504' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-996504/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-996504' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 10:28:37.533213  427470 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 10:28:37.533253  427470 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22047-392561/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-392561/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-392561/.minikube}
	I1206 10:28:37.533281  427470 buildroot.go:174] setting up certificates
	I1206 10:28:37.533294  427470 provision.go:84] configureAuth start
	I1206 10:28:37.536859  427470 main.go:143] libmachine: domain test-preload-996504 has defined MAC address 52:54:00:e6:e1:94 in network mk-test-preload-996504
	I1206 10:28:37.537374  427470 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e6:e1:94", ip: ""} in network mk-test-preload-996504: {Iface:virbr1 ExpiryTime:2025-12-06 11:28:35 +0000 UTC Type:0 Mac:52:54:00:e6:e1:94 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:test-preload-996504 Clientid:01:52:54:00:e6:e1:94}
	I1206 10:28:37.537402  427470 main.go:143] libmachine: domain test-preload-996504 has defined IP address 192.168.39.41 and MAC address 52:54:00:e6:e1:94 in network mk-test-preload-996504
	I1206 10:28:37.539765  427470 main.go:143] libmachine: domain test-preload-996504 has defined MAC address 52:54:00:e6:e1:94 in network mk-test-preload-996504
	I1206 10:28:37.540217  427470 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e6:e1:94", ip: ""} in network mk-test-preload-996504: {Iface:virbr1 ExpiryTime:2025-12-06 11:28:35 +0000 UTC Type:0 Mac:52:54:00:e6:e1:94 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:test-preload-996504 Clientid:01:52:54:00:e6:e1:94}
	I1206 10:28:37.540245  427470 main.go:143] libmachine: domain test-preload-996504 has defined IP address 192.168.39.41 and MAC address 52:54:00:e6:e1:94 in network mk-test-preload-996504
	I1206 10:28:37.540374  427470 provision.go:143] copyHostCerts
	I1206 10:28:37.540434  427470 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-392561/.minikube/cert.pem, removing ...
	I1206 10:28:37.540457  427470 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-392561/.minikube/cert.pem
	I1206 10:28:37.540555  427470 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-392561/.minikube/cert.pem (1123 bytes)
	I1206 10:28:37.540688  427470 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-392561/.minikube/key.pem, removing ...
	I1206 10:28:37.540703  427470 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-392561/.minikube/key.pem
	I1206 10:28:37.540764  427470 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-392561/.minikube/key.pem (1679 bytes)
	I1206 10:28:37.540855  427470 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-392561/.minikube/ca.pem, removing ...
	I1206 10:28:37.540865  427470 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-392561/.minikube/ca.pem
	I1206 10:28:37.540905  427470 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-392561/.minikube/ca.pem (1082 bytes)
	I1206 10:28:37.540989  427470 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-392561/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca-key.pem org=jenkins.test-preload-996504 san=[127.0.0.1 192.168.39.41 localhost minikube test-preload-996504]
	I1206 10:28:37.655420  427470 provision.go:177] copyRemoteCerts
	I1206 10:28:37.655495  427470 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 10:28:37.659327  427470 main.go:143] libmachine: domain test-preload-996504 has defined MAC address 52:54:00:e6:e1:94 in network mk-test-preload-996504
	I1206 10:28:37.659963  427470 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e6:e1:94", ip: ""} in network mk-test-preload-996504: {Iface:virbr1 ExpiryTime:2025-12-06 11:28:35 +0000 UTC Type:0 Mac:52:54:00:e6:e1:94 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:test-preload-996504 Clientid:01:52:54:00:e6:e1:94}
	I1206 10:28:37.660011  427470 main.go:143] libmachine: domain test-preload-996504 has defined IP address 192.168.39.41 and MAC address 52:54:00:e6:e1:94 in network mk-test-preload-996504
	I1206 10:28:37.660236  427470 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/test-preload-996504/id_rsa Username:docker}
	I1206 10:28:37.751123  427470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 10:28:37.781684  427470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1206 10:28:37.811661  427470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 10:28:37.842099  427470 provision.go:87] duration metric: took 308.747008ms to configureAuth
	I1206 10:28:37.842136  427470 buildroot.go:189] setting minikube options for container-runtime
	I1206 10:28:37.842319  427470 config.go:182] Loaded profile config "test-preload-996504": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:28:37.845430  427470 main.go:143] libmachine: domain test-preload-996504 has defined MAC address 52:54:00:e6:e1:94 in network mk-test-preload-996504
	I1206 10:28:37.846047  427470 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e6:e1:94", ip: ""} in network mk-test-preload-996504: {Iface:virbr1 ExpiryTime:2025-12-06 11:28:35 +0000 UTC Type:0 Mac:52:54:00:e6:e1:94 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:test-preload-996504 Clientid:01:52:54:00:e6:e1:94}
	I1206 10:28:37.846077  427470 main.go:143] libmachine: domain test-preload-996504 has defined IP address 192.168.39.41 and MAC address 52:54:00:e6:e1:94 in network mk-test-preload-996504
	I1206 10:28:37.846346  427470 main.go:143] libmachine: Using SSH client type: native
	I1206 10:28:37.846578  427470 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I1206 10:28:37.846606  427470 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 10:28:38.104019  427470 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 10:28:38.104053  427470 machine.go:97] duration metric: took 981.505689ms to provisionDockerMachine
	I1206 10:28:38.104065  427470 start.go:293] postStartSetup for "test-preload-996504" (driver="kvm2")
	I1206 10:28:38.104078  427470 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 10:28:38.104138  427470 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 10:28:38.107318  427470 main.go:143] libmachine: domain test-preload-996504 has defined MAC address 52:54:00:e6:e1:94 in network mk-test-preload-996504
	I1206 10:28:38.107846  427470 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e6:e1:94", ip: ""} in network mk-test-preload-996504: {Iface:virbr1 ExpiryTime:2025-12-06 11:28:35 +0000 UTC Type:0 Mac:52:54:00:e6:e1:94 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:test-preload-996504 Clientid:01:52:54:00:e6:e1:94}
	I1206 10:28:38.107878  427470 main.go:143] libmachine: domain test-preload-996504 has defined IP address 192.168.39.41 and MAC address 52:54:00:e6:e1:94 in network mk-test-preload-996504
	I1206 10:28:38.108073  427470 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/test-preload-996504/id_rsa Username:docker}
	I1206 10:28:38.205507  427470 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 10:28:38.210790  427470 info.go:137] Remote host: Buildroot 2025.02
	I1206 10:28:38.210826  427470 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-392561/.minikube/addons for local assets ...
	I1206 10:28:38.210914  427470 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-392561/.minikube/files for local assets ...
	I1206 10:28:38.211015  427470 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-392561/.minikube/files/etc/ssl/certs/3965342.pem -> 3965342.pem in /etc/ssl/certs
	I1206 10:28:38.211140  427470 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 10:28:38.223243  427470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/files/etc/ssl/certs/3965342.pem --> /etc/ssl/certs/3965342.pem (1708 bytes)
	I1206 10:28:38.255731  427470 start.go:296] duration metric: took 151.629323ms for postStartSetup
	I1206 10:28:38.255786  427470 fix.go:56] duration metric: took 14.758277202s for fixHost
	I1206 10:28:38.258784  427470 main.go:143] libmachine: domain test-preload-996504 has defined MAC address 52:54:00:e6:e1:94 in network mk-test-preload-996504
	I1206 10:28:38.259367  427470 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e6:e1:94", ip: ""} in network mk-test-preload-996504: {Iface:virbr1 ExpiryTime:2025-12-06 11:28:35 +0000 UTC Type:0 Mac:52:54:00:e6:e1:94 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:test-preload-996504 Clientid:01:52:54:00:e6:e1:94}
	I1206 10:28:38.259411  427470 main.go:143] libmachine: domain test-preload-996504 has defined IP address 192.168.39.41 and MAC address 52:54:00:e6:e1:94 in network mk-test-preload-996504
	I1206 10:28:38.259627  427470 main.go:143] libmachine: Using SSH client type: native
	I1206 10:28:38.259909  427470 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I1206 10:28:38.259922  427470 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1206 10:28:38.376034  427470 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765016918.329114038
	
	I1206 10:28:38.376064  427470 fix.go:216] guest clock: 1765016918.329114038
	I1206 10:28:38.376086  427470 fix.go:229] Guest: 2025-12-06 10:28:38.329114038 +0000 UTC Remote: 2025-12-06 10:28:38.25580469 +0000 UTC m=+14.870840261 (delta=73.309348ms)
	I1206 10:28:38.376107  427470 fix.go:200] guest clock delta is within tolerance: 73.309348ms
	I1206 10:28:38.376112  427470 start.go:83] releasing machines lock for "test-preload-996504", held for 14.87862128s
	I1206 10:28:38.379128  427470 main.go:143] libmachine: domain test-preload-996504 has defined MAC address 52:54:00:e6:e1:94 in network mk-test-preload-996504
	I1206 10:28:38.379743  427470 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e6:e1:94", ip: ""} in network mk-test-preload-996504: {Iface:virbr1 ExpiryTime:2025-12-06 11:28:35 +0000 UTC Type:0 Mac:52:54:00:e6:e1:94 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:test-preload-996504 Clientid:01:52:54:00:e6:e1:94}
	I1206 10:28:38.379798  427470 main.go:143] libmachine: domain test-preload-996504 has defined IP address 192.168.39.41 and MAC address 52:54:00:e6:e1:94 in network mk-test-preload-996504
	I1206 10:28:38.380341  427470 ssh_runner.go:195] Run: cat /version.json
	I1206 10:28:38.380476  427470 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 10:28:38.383275  427470 main.go:143] libmachine: domain test-preload-996504 has defined MAC address 52:54:00:e6:e1:94 in network mk-test-preload-996504
	I1206 10:28:38.383749  427470 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e6:e1:94", ip: ""} in network mk-test-preload-996504: {Iface:virbr1 ExpiryTime:2025-12-06 11:28:35 +0000 UTC Type:0 Mac:52:54:00:e6:e1:94 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:test-preload-996504 Clientid:01:52:54:00:e6:e1:94}
	I1206 10:28:38.383787  427470 main.go:143] libmachine: domain test-preload-996504 has defined IP address 192.168.39.41 and MAC address 52:54:00:e6:e1:94 in network mk-test-preload-996504
	I1206 10:28:38.383834  427470 main.go:143] libmachine: domain test-preload-996504 has defined MAC address 52:54:00:e6:e1:94 in network mk-test-preload-996504
	I1206 10:28:38.383956  427470 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/test-preload-996504/id_rsa Username:docker}
	I1206 10:28:38.384381  427470 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e6:e1:94", ip: ""} in network mk-test-preload-996504: {Iface:virbr1 ExpiryTime:2025-12-06 11:28:35 +0000 UTC Type:0 Mac:52:54:00:e6:e1:94 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:test-preload-996504 Clientid:01:52:54:00:e6:e1:94}
	I1206 10:28:38.384432  427470 main.go:143] libmachine: domain test-preload-996504 has defined IP address 192.168.39.41 and MAC address 52:54:00:e6:e1:94 in network mk-test-preload-996504
	I1206 10:28:38.384650  427470 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/test-preload-996504/id_rsa Username:docker}
	I1206 10:28:38.470475  427470 ssh_runner.go:195] Run: systemctl --version
	I1206 10:28:38.504600  427470 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 10:28:38.651797  427470 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 10:28:38.658811  427470 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 10:28:38.658921  427470 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 10:28:38.680523  427470 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 10:28:38.680566  427470 start.go:496] detecting cgroup driver to use...
	I1206 10:28:38.680639  427470 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 10:28:38.699387  427470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 10:28:38.717447  427470 docker.go:218] disabling cri-docker service (if available) ...
	I1206 10:28:38.717518  427470 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 10:28:38.736423  427470 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 10:28:38.753668  427470 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 10:28:38.900478  427470 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 10:28:39.112100  427470 docker.go:234] disabling docker service ...
	I1206 10:28:39.112183  427470 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 10:28:39.129043  427470 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 10:28:39.143905  427470 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 10:28:39.305167  427470 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 10:28:39.450236  427470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 10:28:39.465621  427470 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 10:28:39.488562  427470 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 10:28:39.488630  427470 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:28:39.501199  427470 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 10:28:39.501273  427470 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:28:39.513703  427470 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:28:39.526520  427470 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:28:39.539120  427470 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 10:28:39.552116  427470 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:28:39.565473  427470 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:28:39.586685  427470 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
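The four sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. A minimal Go sketch of the same substitutions, run against a local copy of the drop-in file rather than the VM (path and values are taken from the log; this is not minikube's own code):

// crio_conf_sketch.go: apply the substitutions the log performs with sed
// to a local copy of the cri-o drop-in config.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Local copy of /etc/crio/crio.conf.d/02-crio.conf (placeholder path).
	const path = "02-crio.conf"

	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	conf := string(data)

	// First sed in the log: pin the pause image.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)

	// Next seds: drop any existing conmon_cgroup line, rewrite cgroup_manager,
	// and re-add conmon_cgroup = "pod" right after it.
	conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")

	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("rewrote", path)
}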
	I1206 10:28:39.599347  427470 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 10:28:39.610835  427470 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1206 10:28:39.610910  427470 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1206 10:28:39.633886  427470 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 10:28:39.646525  427470 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:28:39.793280  427470 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 10:28:39.904594  427470 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 10:28:39.904675  427470 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
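The restart above is followed by a bounded wait for /var/run/crio/crio.sock to appear. A small stand-alone sketch of that pattern, polling for a path with a 60s deadline (path and timeout taken from the log):

// wait_for_socket.go: poll until a path exists or the deadline passes,
// mirroring the "Will wait 60s for socket path" step in the log.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // socket path is present
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("socket path present")
}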
	I1206 10:28:39.910251  427470 start.go:564] Will wait 60s for crictl version
	I1206 10:28:39.910322  427470 ssh_runner.go:195] Run: which crictl
	I1206 10:28:39.914987  427470 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1206 10:28:39.951846  427470 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1206 10:28:39.951977  427470 ssh_runner.go:195] Run: crio --version
	I1206 10:28:39.981426  427470 ssh_runner.go:195] Run: crio --version
	I1206 10:28:40.013202  427470 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1206 10:28:40.018008  427470 main.go:143] libmachine: domain test-preload-996504 has defined MAC address 52:54:00:e6:e1:94 in network mk-test-preload-996504
	I1206 10:28:40.018445  427470 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e6:e1:94", ip: ""} in network mk-test-preload-996504: {Iface:virbr1 ExpiryTime:2025-12-06 11:28:35 +0000 UTC Type:0 Mac:52:54:00:e6:e1:94 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:test-preload-996504 Clientid:01:52:54:00:e6:e1:94}
	I1206 10:28:40.018475  427470 main.go:143] libmachine: domain test-preload-996504 has defined IP address 192.168.39.41 and MAC address 52:54:00:e6:e1:94 in network mk-test-preload-996504
	I1206 10:28:40.018752  427470 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1206 10:28:40.023594  427470 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
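The bash one-liner above removes any stale host.minikube.internal line from /etc/hosts and appends the fresh mapping. A sketch of the same upsert in Go; it only prints the rewritten file, since writing /etc/hosts back would need root (name and IP taken from the log):

// hosts_upsert.go: drop any existing mapping for a host name and append
// the desired one, like the grep -v / echo pipeline in the log.
package main

import (
	"fmt"
	"os"
	"strings"
)

func upsertHost(contents, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(contents, "\n") {
		// Mirror grep -v $'\t<name>$': drop any stale mapping for this name.
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	out := strings.TrimRight(strings.Join(kept, "\n"), "\n")
	return out + "\n" + ip + "\t" + name + "\n"
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Print(upsertHost(string(data), "192.168.39.1", "host.minikube.internal"))
}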
	I1206 10:28:40.040955  427470 kubeadm.go:884] updating cluster {Name:test-preload-996504 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.34.2 ClusterName:test-preload-996504 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:
[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 10:28:40.041115  427470 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 10:28:40.041177  427470 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:28:40.076826  427470 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1206 10:28:40.076898  427470 ssh_runner.go:195] Run: which lz4
	I1206 10:28:40.081314  427470 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1206 10:28:40.086301  427470 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1206 10:28:40.086351  427470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
	I1206 10:28:41.386074  427470 crio.go:462] duration metric: took 1.304788799s to copy over tarball
	I1206 10:28:41.386176  427470 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1206 10:28:42.860204  427470 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.473995334s)
	I1206 10:28:42.860242  427470 crio.go:469] duration metric: took 1.474127235s to extract the tarball
	I1206 10:28:42.860250  427470 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1206 10:28:42.896923  427470 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:28:42.935677  427470 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 10:28:42.935704  427470 cache_images.go:86] Images are preloaded, skipping loading
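The preload path copies a roughly 340 MB images tarball into the VM and unpacks it with tar's lz4 filter before cri-o can report the images as present. A sketch of that extraction step, shelling out to tar with the same flags as the log but against a local scratch directory (tarball name and flags from the log):

// preload_extract.go: unpack a preloaded-images tarball with tar's lz4 filter.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	const tarball = "preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4"
	const dest = "./var" // the log extracts into /var on the node

	if err := os.MkdirAll(dest, 0o755); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	start := time.Now()
	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dest, "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "extract failed:", err)
		os.Exit(1)
	}
	fmt.Printf("extracted %s in %s\n", tarball, time.Since(start))
}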
	I1206 10:28:42.935729  427470 kubeadm.go:935] updating node { 192.168.39.41 8443 v1.34.2 crio true true} ...
	I1206 10:28:42.935871  427470 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-996504 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.41
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:test-preload-996504 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
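The kubelet drop-in above is generated from the node's binary path, name and IP. A sketch of rendering an equivalent unit with text/template; the field values are the ones shown in the log, and per the scp steps below the real file lands at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf:

// kubelet_unit_template.go: render a kubelet drop-in like the one above.
package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	// Values taken from this run's log output.
	_ = t.Execute(os.Stdout, map[string]string{
		"KubeletPath": "/var/lib/minikube/binaries/v1.34.2/kubelet",
		"NodeName":    "test-preload-996504",
		"NodeIP":      "192.168.39.41",
	})
}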
	I1206 10:28:42.935955  427470 ssh_runner.go:195] Run: crio config
	I1206 10:28:42.984403  427470 cni.go:84] Creating CNI manager for ""
	I1206 10:28:42.984428  427470 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 10:28:42.984447  427470 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 10:28:42.984468  427470 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.41 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-996504 NodeName:test-preload-996504 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.41"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.41 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 10:28:42.984614  427470 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.41
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-996504"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.41"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.41"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
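The generated kubeadm.yaml bundles InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration documents. A sketch that parses just the KubeletConfiguration fields this run depends on (cgroupDriver and containerRuntimeEndpoint) with gopkg.in/yaml.v3; the struct models only those fields and is not the upstream type:

// kubelet_config_check.go: read back two fields from the KubeletConfiguration
// document shown above.
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

const kubeletYAML = `
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
staticPodPath: /etc/kubernetes/manifests
`

type kubeletConfig struct {
	Kind                     string `yaml:"kind"`
	CgroupDriver             string `yaml:"cgroupDriver"`
	ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
	ClusterDomain            string `yaml:"clusterDomain"`
}

func main() {
	var cfg kubeletConfig
	if err := yaml.Unmarshal([]byte(kubeletYAML), &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("%s: cgroupDriver=%s runtime=%s\n", cfg.Kind, cfg.CgroupDriver, cfg.ContainerRuntimeEndpoint)
}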
	
	I1206 10:28:42.984688  427470 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 10:28:42.997029  427470 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 10:28:42.997107  427470 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 10:28:43.009129  427470 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1206 10:28:43.030665  427470 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 10:28:43.052235  427470 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1206 10:28:43.073804  427470 ssh_runner.go:195] Run: grep 192.168.39.41	control-plane.minikube.internal$ /etc/hosts
	I1206 10:28:43.078380  427470 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.41	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 10:28:43.094203  427470 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:28:43.241676  427470 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:28:43.262531  427470 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/test-preload-996504 for IP: 192.168.39.41
	I1206 10:28:43.262558  427470 certs.go:195] generating shared ca certs ...
	I1206 10:28:43.262590  427470 certs.go:227] acquiring lock for ca certs: {Name:mk3de97d1b446a24abef5e763ff5edd1f090afa6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:28:43.262825  427470 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-392561/.minikube/ca.key
	I1206 10:28:43.262881  427470 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-392561/.minikube/proxy-client-ca.key
	I1206 10:28:43.262894  427470 certs.go:257] generating profile certs ...
	I1206 10:28:43.262982  427470 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/test-preload-996504/client.key
	I1206 10:28:43.263053  427470 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/test-preload-996504/apiserver.key.3e304632
	I1206 10:28:43.263101  427470 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/test-preload-996504/proxy-client.key
	I1206 10:28:43.263212  427470 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/396534.pem (1338 bytes)
	W1206 10:28:43.263242  427470 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-392561/.minikube/certs/396534_empty.pem, impossibly tiny 0 bytes
	I1206 10:28:43.263250  427470 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 10:28:43.263276  427470 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca.pem (1082 bytes)
	I1206 10:28:43.263302  427470 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/cert.pem (1123 bytes)
	I1206 10:28:43.263328  427470 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/key.pem (1679 bytes)
	I1206 10:28:43.263371  427470 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-392561/.minikube/files/etc/ssl/certs/3965342.pem (1708 bytes)
	I1206 10:28:43.264198  427470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 10:28:43.308467  427470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 10:28:43.343992  427470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 10:28:43.374220  427470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1206 10:28:43.403734  427470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/test-preload-996504/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1206 10:28:43.434812  427470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/test-preload-996504/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 10:28:43.466277  427470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/test-preload-996504/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 10:28:43.497561  427470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/test-preload-996504/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 10:28:43.529593  427470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/files/etc/ssl/certs/3965342.pem --> /usr/share/ca-certificates/3965342.pem (1708 bytes)
	I1206 10:28:43.560900  427470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 10:28:43.591455  427470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/certs/396534.pem --> /usr/share/ca-certificates/396534.pem (1338 bytes)
	I1206 10:28:43.623155  427470 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 10:28:43.646264  427470 ssh_runner.go:195] Run: openssl version
	I1206 10:28:43.653529  427470 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:28:43.667104  427470 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 10:28:43.680234  427470 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:28:43.686110  427470 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 09:12 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:28:43.686195  427470 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:28:43.694670  427470 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 10:28:43.707285  427470 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1206 10:28:43.720167  427470 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/396534.pem
	I1206 10:28:43.732605  427470 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/396534.pem /etc/ssl/certs/396534.pem
	I1206 10:28:43.744659  427470 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/396534.pem
	I1206 10:28:43.749770  427470 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 09:24 /usr/share/ca-certificates/396534.pem
	I1206 10:28:43.749833  427470 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/396534.pem
	I1206 10:28:43.756846  427470 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 10:28:43.768660  427470 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/396534.pem /etc/ssl/certs/51391683.0
	I1206 10:28:43.780836  427470 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3965342.pem
	I1206 10:28:43.792340  427470 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3965342.pem /etc/ssl/certs/3965342.pem
	I1206 10:28:43.804384  427470 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3965342.pem
	I1206 10:28:43.810168  427470 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 09:24 /usr/share/ca-certificates/3965342.pem
	I1206 10:28:43.810239  427470 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3965342.pem
	I1206 10:28:43.817581  427470 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 10:28:43.829372  427470 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3965342.pem /etc/ssl/certs/3ec20f2e.0
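Each CA bundle above is installed by copying the PEM under /usr/share/ca-certificates, hashing it with openssl x509 -hash, and symlinking /etc/ssl/certs/<hash>.0 to it (b5213941.0, 51391683.0, 3ec20f2e.0 in the log). A sketch of one iteration of that loop against local placeholder paths:

// ca_symlink_sketch.go: compute a certificate's OpenSSL subject hash and
// create the <hash>.0 symlink that the system trust store expects.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	const pemPath = "minikubeCA.pem"     // placeholder for the installed PEM
	const certsDir = "./etc-ssl-certs"   // placeholder for /etc/ssl/certs

	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941 in the log

	if err := os.MkdirAll(certsDir, 0o755); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	absPEM, err := filepath.Abs(pemPath)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // ln -fs semantics: replace an existing link
	if err := os.Symlink(absPEM, link); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("created", link)
}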
	I1206 10:28:43.841917  427470 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:28:43.847297  427470 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 10:28:43.854679  427470 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 10:28:43.861990  427470 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 10:28:43.869421  427470 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 10:28:43.877518  427470 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 10:28:43.884850  427470 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
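The six openssl x509 -checkend 86400 calls above ask whether each control-plane certificate expires within the next 24 hours. The same check in pure Go with crypto/x509, against a placeholder certificate path:

// cert_checkend.go: report whether a PEM certificate expires within 24 hours,
// the pure-Go equivalent of "openssl x509 -checkend 86400".
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when NotAfter falls before now + d, i.e. the cert expires within d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}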
	I1206 10:28:43.892333  427470 kubeadm.go:401] StartCluster: {Name:test-preload-996504 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
34.2 ClusterName:test-preload-996504 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:28:43.892447  427470 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:28:43.892527  427470 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:28:43.926730  427470 cri.go:89] found id: ""
	I1206 10:28:43.926817  427470 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 10:28:43.939675  427470 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 10:28:43.939704  427470 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 10:28:43.939810  427470 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 10:28:43.954474  427470 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:28:43.955021  427470 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-996504" does not appear in /home/jenkins/minikube-integration/22047-392561/kubeconfig
	I1206 10:28:43.955200  427470 kubeconfig.go:62] /home/jenkins/minikube-integration/22047-392561/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-996504" cluster setting kubeconfig missing "test-preload-996504" context setting]
	I1206 10:28:43.955616  427470 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-392561/kubeconfig: {Name:mkde56684c6f903767a9ec1254dd48fbeb8e8b50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:28:43.956441  427470 kapi.go:59] client config for test-preload-996504: &rest.Config{Host:"https://192.168.39.41:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22047-392561/.minikube/profiles/test-preload-996504/client.crt", KeyFile:"/home/jenkins/minikube-integration/22047-392561/.minikube/profiles/test-preload-996504/client.key", CAFile:"/home/jenkins/minikube-integration/22047-392561/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint
8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28156e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 10:28:43.957107  427470 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1206 10:28:43.957130  427470 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1206 10:28:43.957137  427470 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1206 10:28:43.957145  427470 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1206 10:28:43.957151  427470 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1206 10:28:43.957693  427470 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 10:28:43.970171  427470 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.41
	I1206 10:28:43.970215  427470 kubeadm.go:1161] stopping kube-system containers ...
	I1206 10:28:43.970231  427470 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1206 10:28:43.970302  427470 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:28:44.009995  427470 cri.go:89] found id: ""
	I1206 10:28:44.010075  427470 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1206 10:28:44.031621  427470 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:28:44.044077  427470 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 10:28:44.044098  427470 kubeadm.go:158] found existing configuration files:
	
	I1206 10:28:44.044155  427470 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 10:28:44.055262  427470 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 10:28:44.055350  427470 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 10:28:44.067653  427470 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 10:28:44.078845  427470 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 10:28:44.078922  427470 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:28:44.091034  427470 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 10:28:44.104796  427470 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 10:28:44.104858  427470 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:28:44.119677  427470 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 10:28:44.133593  427470 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 10:28:44.133666  427470 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 10:28:44.146128  427470 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 10:28:44.161670  427470 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:28:44.223173  427470 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:28:45.405293  427470 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.182082101s)
	I1206 10:28:45.405371  427470 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:28:45.673248  427470 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:28:45.740838  427470 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:28:45.837454  427470 api_server.go:52] waiting for apiserver process to appear ...
	I1206 10:28:45.837580  427470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:28:46.338129  427470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:28:46.837947  427470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:28:47.338366  427470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:28:47.838547  427470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:28:47.884874  427470 api_server.go:72] duration metric: took 2.047440221s to wait for apiserver process to appear ...
	I1206 10:28:47.884910  427470 api_server.go:88] waiting for apiserver healthz status ...
	I1206 10:28:47.884935  427470 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1206 10:28:47.885576  427470 api_server.go:269] stopped: https://192.168.39.41:8443/healthz: Get "https://192.168.39.41:8443/healthz": dial tcp 192.168.39.41:8443: connect: connection refused
	I1206 10:28:48.385268  427470 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1206 10:28:50.741621  427470 api_server.go:279] https://192.168.39.41:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1206 10:28:50.741653  427470 api_server.go:103] status: https://192.168.39.41:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1206 10:28:50.741669  427470 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1206 10:28:50.828553  427470 api_server.go:279] https://192.168.39.41:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1206 10:28:50.828591  427470 api_server.go:103] status: https://192.168.39.41:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1206 10:28:50.885962  427470 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1206 10:28:50.898882  427470 api_server.go:279] https://192.168.39.41:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1206 10:28:50.898914  427470 api_server.go:103] status: https://192.168.39.41:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1206 10:28:51.385695  427470 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1206 10:28:51.390653  427470 api_server.go:279] https://192.168.39.41:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1206 10:28:51.390679  427470 api_server.go:103] status: https://192.168.39.41:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1206 10:28:51.885362  427470 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1206 10:28:51.893861  427470 api_server.go:279] https://192.168.39.41:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1206 10:28:51.893896  427470 api_server.go:103] status: https://192.168.39.41:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1206 10:28:52.385540  427470 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1206 10:28:52.391220  427470 api_server.go:279] https://192.168.39.41:8443/healthz returned 200:
	ok
	I1206 10:28:52.399694  427470 api_server.go:141] control plane version: v1.34.2
	I1206 10:28:52.399748  427470 api_server.go:131] duration metric: took 4.514831209s to wait for apiserver health ...
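The healthz wait above tolerates three failure shapes before succeeding: connection refused while the apiserver starts, 403 for the anonymous probe before RBAC bootstraps, and 500 while post-start hooks are still failing. A sketch of that polling loop; TLS verification is skipped here for brevity, whereas the client in the log presents the profile's client certificate:

// healthz_poll.go: poll an apiserver /healthz endpoint until it returns 200.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body)
				return nil
			}
			// 403 and 500 are treated as "not ready yet", like the log does.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.41:8443/healthz", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}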
	I1206 10:28:52.399759  427470 cni.go:84] Creating CNI manager for ""
	I1206 10:28:52.399765  427470 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 10:28:52.401680  427470 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1206 10:28:52.403002  427470 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1206 10:28:52.416815  427470 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1206 10:28:52.446966  427470 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 10:28:52.452816  427470 system_pods.go:59] 7 kube-system pods found
	I1206 10:28:52.452857  427470 system_pods.go:61] "coredns-66bc5c9577-m7xxz" [b106b432-9071-4c2e-b5b8-4852c2b10584] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 10:28:52.452867  427470 system_pods.go:61] "etcd-test-preload-996504" [54a3ffe8-34bc-4c48-b27a-11617eb6a607] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1206 10:28:52.452879  427470 system_pods.go:61] "kube-apiserver-test-preload-996504" [b8864b8d-c969-4624-9a3e-9730d62fbbe7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 10:28:52.452887  427470 system_pods.go:61] "kube-controller-manager-test-preload-996504" [d2ea18be-bf70-4917-8bfc-163ef51c6313] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 10:28:52.452911  427470 system_pods.go:61] "kube-proxy-t2nw7" [495927b9-b002-4c19-ae7f-70a3bbbf5063] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1206 10:28:52.452928  427470 system_pods.go:61] "kube-scheduler-test-preload-996504" [7e6af7ad-a8bc-4237-a36b-e733324e534e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 10:28:52.452937  427470 system_pods.go:61] "storage-provisioner" [fe3084f4-b72d-4bc9-b6ff-a85833f09ae6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 10:28:52.452946  427470 system_pods.go:74] duration metric: took 5.954169ms to wait for pod list to return data ...
	I1206 10:28:52.452960  427470 node_conditions.go:102] verifying NodePressure condition ...
	I1206 10:28:52.457904  427470 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1206 10:28:52.457941  427470 node_conditions.go:123] node cpu capacity is 2
	I1206 10:28:52.457962  427470 node_conditions.go:105] duration metric: took 4.997101ms to run NodePressure ...
	I1206 10:28:52.458024  427470 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:28:52.736602  427470 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1206 10:28:52.740959  427470 kubeadm.go:744] kubelet initialised
	I1206 10:28:52.740988  427470 kubeadm.go:745] duration metric: took 4.354322ms waiting for restarted kubelet to initialise ...
	I1206 10:28:52.741010  427470 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 10:28:52.760280  427470 ops.go:34] apiserver oom_adj: -16
	I1206 10:28:52.760320  427470 kubeadm.go:602] duration metric: took 8.820607038s to restartPrimaryControlPlane
	I1206 10:28:52.760334  427470 kubeadm.go:403] duration metric: took 8.868012225s to StartCluster
	I1206 10:28:52.760379  427470 settings.go:142] acquiring lock: {Name:mk6aea9c06de6b4df1ec2e5d18bffa62e8a405af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:28:52.760514  427470 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22047-392561/kubeconfig
	I1206 10:28:52.761564  427470 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-392561/kubeconfig: {Name:mkde56684c6f903767a9ec1254dd48fbeb8e8b50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:28:52.761944  427470 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 10:28:52.762166  427470 config.go:182] Loaded profile config "test-preload-996504": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:28:52.762119  427470 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 10:28:52.762217  427470 addons.go:70] Setting storage-provisioner=true in profile "test-preload-996504"
	I1206 10:28:52.762242  427470 addons.go:239] Setting addon storage-provisioner=true in "test-preload-996504"
	W1206 10:28:52.762254  427470 addons.go:248] addon storage-provisioner should already be in state true
	I1206 10:28:52.762263  427470 addons.go:70] Setting default-storageclass=true in profile "test-preload-996504"
	I1206 10:28:52.762284  427470 host.go:66] Checking if "test-preload-996504" exists ...
	I1206 10:28:52.762302  427470 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "test-preload-996504"
	I1206 10:28:52.764300  427470 out.go:179] * Verifying Kubernetes components...
	I1206 10:28:52.765670  427470 kapi.go:59] client config for test-preload-996504: &rest.Config{Host:"https://192.168.39.41:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22047-392561/.minikube/profiles/test-preload-996504/client.crt", KeyFile:"/home/jenkins/minikube-integration/22047-392561/.minikube/profiles/test-preload-996504/client.key", CAFile:"/home/jenkins/minikube-integration/22047-392561/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint
8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28156e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 10:28:52.765977  427470 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 10:28:52.766026  427470 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:28:52.766088  427470 addons.go:239] Setting addon default-storageclass=true in "test-preload-996504"
	W1206 10:28:52.766115  427470 addons.go:248] addon default-storageclass should already be in state true
	I1206 10:28:52.766145  427470 host.go:66] Checking if "test-preload-996504" exists ...
	I1206 10:28:52.767381  427470 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:28:52.767405  427470 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 10:28:52.768249  427470 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 10:28:52.768270  427470 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 10:28:52.771045  427470 main.go:143] libmachine: domain test-preload-996504 has defined MAC address 52:54:00:e6:e1:94 in network mk-test-preload-996504
	I1206 10:28:52.771305  427470 main.go:143] libmachine: domain test-preload-996504 has defined MAC address 52:54:00:e6:e1:94 in network mk-test-preload-996504
	I1206 10:28:52.771561  427470 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e6:e1:94", ip: ""} in network mk-test-preload-996504: {Iface:virbr1 ExpiryTime:2025-12-06 11:28:35 +0000 UTC Type:0 Mac:52:54:00:e6:e1:94 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:test-preload-996504 Clientid:01:52:54:00:e6:e1:94}
	I1206 10:28:52.771597  427470 main.go:143] libmachine: domain test-preload-996504 has defined IP address 192.168.39.41 and MAC address 52:54:00:e6:e1:94 in network mk-test-preload-996504
	I1206 10:28:52.771810  427470 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/test-preload-996504/id_rsa Username:docker}
	I1206 10:28:52.771984  427470 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e6:e1:94", ip: ""} in network mk-test-preload-996504: {Iface:virbr1 ExpiryTime:2025-12-06 11:28:35 +0000 UTC Type:0 Mac:52:54:00:e6:e1:94 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:test-preload-996504 Clientid:01:52:54:00:e6:e1:94}
	I1206 10:28:52.772024  427470 main.go:143] libmachine: domain test-preload-996504 has defined IP address 192.168.39.41 and MAC address 52:54:00:e6:e1:94 in network mk-test-preload-996504
	I1206 10:28:52.772227  427470 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/test-preload-996504/id_rsa Username:docker}
	I1206 10:28:53.092913  427470 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:28:53.125173  427470 node_ready.go:35] waiting up to 6m0s for node "test-preload-996504" to be "Ready" ...
	I1206 10:28:53.129112  427470 node_ready.go:49] node "test-preload-996504" is "Ready"
	I1206 10:28:53.129140  427470 node_ready.go:38] duration metric: took 3.931851ms for node "test-preload-996504" to be "Ready" ...
	I1206 10:28:53.129153  427470 api_server.go:52] waiting for apiserver process to appear ...
	I1206 10:28:53.129211  427470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:28:53.163641  427470 api_server.go:72] duration metric: took 401.63922ms to wait for apiserver process to appear ...
	I1206 10:28:53.163677  427470 api_server.go:88] waiting for apiserver healthz status ...
	I1206 10:28:53.163748  427470 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1206 10:28:53.170284  427470 api_server.go:279] https://192.168.39.41:8443/healthz returned 200:
	ok
	I1206 10:28:53.171240  427470 api_server.go:141] control plane version: v1.34.2
	I1206 10:28:53.171271  427470 api_server.go:131] duration metric: took 7.585814ms to wait for apiserver health ...
	I1206 10:28:53.171284  427470 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 10:28:53.176657  427470 system_pods.go:59] 7 kube-system pods found
	I1206 10:28:53.176689  427470 system_pods.go:61] "coredns-66bc5c9577-m7xxz" [b106b432-9071-4c2e-b5b8-4852c2b10584] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 10:28:53.176696  427470 system_pods.go:61] "etcd-test-preload-996504" [54a3ffe8-34bc-4c48-b27a-11617eb6a607] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1206 10:28:53.176704  427470 system_pods.go:61] "kube-apiserver-test-preload-996504" [b8864b8d-c969-4624-9a3e-9730d62fbbe7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 10:28:53.176757  427470 system_pods.go:61] "kube-controller-manager-test-preload-996504" [d2ea18be-bf70-4917-8bfc-163ef51c6313] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 10:28:53.176769  427470 system_pods.go:61] "kube-proxy-t2nw7" [495927b9-b002-4c19-ae7f-70a3bbbf5063] Running
	I1206 10:28:53.176780  427470 system_pods.go:61] "kube-scheduler-test-preload-996504" [7e6af7ad-a8bc-4237-a36b-e733324e534e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 10:28:53.176787  427470 system_pods.go:61] "storage-provisioner" [fe3084f4-b72d-4bc9-b6ff-a85833f09ae6] Running
	I1206 10:28:53.176794  427470 system_pods.go:74] duration metric: took 5.503811ms to wait for pod list to return data ...
	I1206 10:28:53.176801  427470 default_sa.go:34] waiting for default service account to be created ...
	I1206 10:28:53.179787  427470 default_sa.go:45] found service account: "default"
	I1206 10:28:53.179812  427470 default_sa.go:55] duration metric: took 3.004242ms for default service account to be created ...
	I1206 10:28:53.179821  427470 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 10:28:53.182241  427470 system_pods.go:86] 7 kube-system pods found
	I1206 10:28:53.182269  427470 system_pods.go:89] "coredns-66bc5c9577-m7xxz" [b106b432-9071-4c2e-b5b8-4852c2b10584] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 10:28:53.182276  427470 system_pods.go:89] "etcd-test-preload-996504" [54a3ffe8-34bc-4c48-b27a-11617eb6a607] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1206 10:28:53.182284  427470 system_pods.go:89] "kube-apiserver-test-preload-996504" [b8864b8d-c969-4624-9a3e-9730d62fbbe7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 10:28:53.182289  427470 system_pods.go:89] "kube-controller-manager-test-preload-996504" [d2ea18be-bf70-4917-8bfc-163ef51c6313] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 10:28:53.182294  427470 system_pods.go:89] "kube-proxy-t2nw7" [495927b9-b002-4c19-ae7f-70a3bbbf5063] Running
	I1206 10:28:53.182299  427470 system_pods.go:89] "kube-scheduler-test-preload-996504" [7e6af7ad-a8bc-4237-a36b-e733324e534e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 10:28:53.182304  427470 system_pods.go:89] "storage-provisioner" [fe3084f4-b72d-4bc9-b6ff-a85833f09ae6] Running
	I1206 10:28:53.182311  427470 system_pods.go:126] duration metric: took 2.484873ms to wait for k8s-apps to be running ...
	I1206 10:28:53.182318  427470 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 10:28:53.182367  427470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 10:28:53.196966  427470 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:28:53.205251  427470 system_svc.go:56] duration metric: took 22.922758ms WaitForService to wait for kubelet
	I1206 10:28:53.205287  427470 kubeadm.go:587] duration metric: took 443.294973ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 10:28:53.205314  427470 node_conditions.go:102] verifying NodePressure condition ...
	I1206 10:28:53.210299  427470 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1206 10:28:53.210336  427470 node_conditions.go:123] node cpu capacity is 2
	I1206 10:28:53.210356  427470 node_conditions.go:105] duration metric: took 5.035609ms to run NodePressure ...
	I1206 10:28:53.210375  427470 start.go:242] waiting for startup goroutines ...
	I1206 10:28:53.475546  427470 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:28:54.163849  427470 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1206 10:28:54.165239  427470 addons.go:530] duration metric: took 1.403125447s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1206 10:28:54.165288  427470 start.go:247] waiting for cluster config update ...
	I1206 10:28:54.165300  427470 start.go:256] writing updated cluster config ...
	I1206 10:28:54.165572  427470 ssh_runner.go:195] Run: rm -f paused
	I1206 10:28:54.171107  427470 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 10:28:54.171640  427470 kapi.go:59] client config for test-preload-996504: &rest.Config{Host:"https://192.168.39.41:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22047-392561/.minikube/profiles/test-preload-996504/client.crt", KeyFile:"/home/jenkins/minikube-integration/22047-392561/.minikube/profiles/test-preload-996504/client.key", CAFile:"/home/jenkins/minikube-integration/22047-392561/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint
8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28156e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 10:28:54.175620  427470 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-m7xxz" in "kube-system" namespace to be "Ready" or be gone ...
	W1206 10:28:56.183748  427470 pod_ready.go:104] pod "coredns-66bc5c9577-m7xxz" is not "Ready", error: <nil>
	W1206 10:28:58.682001  427470 pod_ready.go:104] pod "coredns-66bc5c9577-m7xxz" is not "Ready", error: <nil>
	W1206 10:29:00.682158  427470 pod_ready.go:104] pod "coredns-66bc5c9577-m7xxz" is not "Ready", error: <nil>
	I1206 10:29:02.182043  427470 pod_ready.go:94] pod "coredns-66bc5c9577-m7xxz" is "Ready"
	I1206 10:29:02.182069  427470 pod_ready.go:86] duration metric: took 8.006416598s for pod "coredns-66bc5c9577-m7xxz" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:29:02.184465  427470 pod_ready.go:83] waiting for pod "etcd-test-preload-996504" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:29:03.191421  427470 pod_ready.go:94] pod "etcd-test-preload-996504" is "Ready"
	I1206 10:29:03.191454  427470 pod_ready.go:86] duration metric: took 1.006966076s for pod "etcd-test-preload-996504" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:29:03.194794  427470 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-996504" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:29:03.199277  427470 pod_ready.go:94] pod "kube-apiserver-test-preload-996504" is "Ready"
	I1206 10:29:03.199304  427470 pod_ready.go:86] duration metric: took 4.484782ms for pod "kube-apiserver-test-preload-996504" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:29:03.201696  427470 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-996504" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:29:03.206680  427470 pod_ready.go:94] pod "kube-controller-manager-test-preload-996504" is "Ready"
	I1206 10:29:03.206720  427470 pod_ready.go:86] duration metric: took 4.972416ms for pod "kube-controller-manager-test-preload-996504" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:29:03.379413  427470 pod_ready.go:83] waiting for pod "kube-proxy-t2nw7" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:29:03.780558  427470 pod_ready.go:94] pod "kube-proxy-t2nw7" is "Ready"
	I1206 10:29:03.780593  427470 pod_ready.go:86] duration metric: took 401.155334ms for pod "kube-proxy-t2nw7" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:29:03.980184  427470 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-996504" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:29:04.379117  427470 pod_ready.go:94] pod "kube-scheduler-test-preload-996504" is "Ready"
	I1206 10:29:04.379148  427470 pod_ready.go:86] duration metric: took 398.935157ms for pod "kube-scheduler-test-preload-996504" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:29:04.379166  427470 pod_ready.go:40] duration metric: took 10.208020651s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 10:29:04.424722  427470 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1206 10:29:04.426918  427470 out.go:179] * Done! kubectl is now configured to use "test-preload-996504" cluster and "default" namespace by default
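
The startup log above builds a client-go rest.Config from the profile's client.crt/client.key and CA (kapi.go), probes the apiserver at https://192.168.39.41:8443/healthz expecting the literal body "ok" (api_server.go), and then waits for the labelled kube-system pods to report Ready (pod_ready.go). The sketch below is not minikube's own code; it just reproduces those two checks with client-go. The host address, certificate paths, and label selectors are copied from the log; everything else is illustrative.

```go
// Sketch only: re-run the health and readiness checks seen in the log above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{
		Host: "https://192.168.39.41:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/home/jenkins/minikube-integration/22047-392561/.minikube/profiles/test-preload-996504/client.crt",
			KeyFile:  "/home/jenkins/minikube-integration/22047-392561/.minikube/profiles/test-preload-996504/client.key",
			CAFile:   "/home/jenkins/minikube-integration/22047-392561/.minikube/ca.crt",
		},
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()

	// 1) apiserver healthz, as in api_server.go: a 200 with body "ok".
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
	fmt.Printf("healthz: %s err=%v\n", body, err)

	// 2) wait for each labelled control-plane workload to be Ready,
	//    mirroring the pod_ready.go loop in the log.
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	for _, sel := range selectors {
		for {
			if err := ctx.Err(); err != nil {
				panic(err) // timed out waiting
			}
			pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
			if err == nil && allReady(pods.Items) {
				fmt.Println(sel, "Ready")
				break
			}
			time.Sleep(2 * time.Second)
		}
	}
}

// allReady reports whether every pod in the list has PodReady=True.
func allReady(pods []corev1.Pod) bool {
	if len(pods) == 0 {
		return false
	}
	for _, p := range pods {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if !ready {
			return false
		}
	}
	return true
}
```
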
	
	
	==> CRI-O <==
	Dec 06 10:29:05 test-preload-996504 crio[834]: time="2025-12-06 10:29:05.230563269Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765016945230288325,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2d1d5d6d-a5e7-43e8-89f1-78206aff4ba8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 10:29:05 test-preload-996504 crio[834]: time="2025-12-06 10:29:05.231473233Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6aa5eb36-a6a6-4795-a19b-a59972a33d06 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 10:29:05 test-preload-996504 crio[834]: time="2025-12-06 10:29:05.231619607Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6aa5eb36-a6a6-4795-a19b-a59972a33d06 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 10:29:05 test-preload-996504 crio[834]: time="2025-12-06 10:29:05.231801005Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dc5ffbe0116ea60e632bce5e63f8dfff03b970bcaa5828519dfc9e3a22655d73,PodSandboxId:b13af3f66ea9660039a4e49893e84cf337f62bc05111c75db1250dc7445eefbf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765016935832880514,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-m7xxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b106b432-9071-4c2e-b5b8-4852c2b10584,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4133bf37e877a47ce7c91710b63ee9a218afd545595bf42ed9acdfc17c230738,PodSandboxId:54729486475a58db4731d385e8291683fcbe5e498cd5cc902ceb62b369bff225,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765016932219614647,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t2nw7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 495927b9-b002-4c19-ae7f-70a3bbbf5063,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:854c2acf83ae6c1e873945707b042da7b3c69dc3d8acd90a564cc03363cb3a9f,PodSandboxId:4915cef8f2e58081869c0efb917571f5871c6e378e485a3a3c43a26b593f1997,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765016932225126288,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe3084f4-b72d-4bc9-b6ff-a85833f09ae6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ea36f3116534554aebc5c4fe59e8905cb8ab3c6db505565147359d96a2fc338,PodSandboxId:bbd73a7ef31bdef3277a57c08c016e67b5280802b65b7912ab609ba0ebe483fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765016927592847365,Labels:map[string]string{io.kubernetes.container.name:
kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-996504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97b4f5b62f12e57f6e3d1c66d9854e90,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3786d083621af5da703d47b0f7b691de0a9f180fccc500df89c16736d1d6249c,PodSandboxId:0b464351b3e796488f0614459e1cd64650ae99c4ceae131e0907fb37c8c93bd4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:
CONTAINER_RUNNING,CreatedAt:1765016927576083661,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-996504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07a743ba49b70f1204facae0b3406012,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e0c2f0aa9f2b2374ad97fe5e61202f3da5d23f58f669c533cb1d6bfcb595b48,PodSandboxId:52cfcce4601cea2e2f2a3c6a53b816a482a2ae67bf339316d1ea4b7b3b09a7c8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765016927562865968,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-996504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7f6f7415f79cdc4fabb6ca4b05a7d10,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab1c44c30c3d2afc5597d11b675b05ad8a7f03efa490a6101f402c258d96ed38,PodSandboxId:7abfae4d8e5f6f2b52d1d86ce9e030c747286886e763c09f63f2e5eb734d149d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8ba
cf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765016927571754934,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-996504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33347f55a7bed14a3a90ed775ec53fc8,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6aa5eb36-a6a6-4795-a19b-a59972a33d06 name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 06 10:29:05 test-preload-996504 crio[834]: time="2025-12-06 10:29:05.271527824Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4e2d5e45-2d12-45a0-a61e-555ee6f24319 name=/runtime.v1.RuntimeService/Version
	Dec 06 10:29:05 test-preload-996504 crio[834]: time="2025-12-06 10:29:05.271666793Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4e2d5e45-2d12-45a0-a61e-555ee6f24319 name=/runtime.v1.RuntimeService/Version
	Dec 06 10:29:05 test-preload-996504 crio[834]: time="2025-12-06 10:29:05.272858824Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bb307029-3d87-4fef-b720-0616b3b0f251 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 10:29:05 test-preload-996504 crio[834]: time="2025-12-06 10:29:05.274075543Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765016945274048430,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bb307029-3d87-4fef-b720-0616b3b0f251 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 10:29:05 test-preload-996504 crio[834]: time="2025-12-06 10:29:05.275376374Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5e078cd7-c95a-44e4-bd81-b1c223590c49 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 10:29:05 test-preload-996504 crio[834]: time="2025-12-06 10:29:05.275572773Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5e078cd7-c95a-44e4-bd81-b1c223590c49 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 10:29:05 test-preload-996504 crio[834]: time="2025-12-06 10:29:05.275772048Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dc5ffbe0116ea60e632bce5e63f8dfff03b970bcaa5828519dfc9e3a22655d73,PodSandboxId:b13af3f66ea9660039a4e49893e84cf337f62bc05111c75db1250dc7445eefbf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765016935832880514,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-m7xxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b106b432-9071-4c2e-b5b8-4852c2b10584,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4133bf37e877a47ce7c91710b63ee9a218afd545595bf42ed9acdfc17c230738,PodSandboxId:54729486475a58db4731d385e8291683fcbe5e498cd5cc902ceb62b369bff225,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765016932219614647,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t2nw7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 495927b9-b002-4c19-ae7f-70a3bbbf5063,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:854c2acf83ae6c1e873945707b042da7b3c69dc3d8acd90a564cc03363cb3a9f,PodSandboxId:4915cef8f2e58081869c0efb917571f5871c6e378e485a3a3c43a26b593f1997,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765016932225126288,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe3084f4-b72d-4bc9-b6ff-a85833f09ae6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ea36f3116534554aebc5c4fe59e8905cb8ab3c6db505565147359d96a2fc338,PodSandboxId:bbd73a7ef31bdef3277a57c08c016e67b5280802b65b7912ab609ba0ebe483fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765016927592847365,Labels:map[string]string{io.kubernetes.container.name:
kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-996504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97b4f5b62f12e57f6e3d1c66d9854e90,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3786d083621af5da703d47b0f7b691de0a9f180fccc500df89c16736d1d6249c,PodSandboxId:0b464351b3e796488f0614459e1cd64650ae99c4ceae131e0907fb37c8c93bd4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:
CONTAINER_RUNNING,CreatedAt:1765016927576083661,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-996504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07a743ba49b70f1204facae0b3406012,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e0c2f0aa9f2b2374ad97fe5e61202f3da5d23f58f669c533cb1d6bfcb595b48,PodSandboxId:52cfcce4601cea2e2f2a3c6a53b816a482a2ae67bf339316d1ea4b7b3b09a7c8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765016927562865968,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-996504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7f6f7415f79cdc4fabb6ca4b05a7d10,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab1c44c30c3d2afc5597d11b675b05ad8a7f03efa490a6101f402c258d96ed38,PodSandboxId:7abfae4d8e5f6f2b52d1d86ce9e030c747286886e763c09f63f2e5eb734d149d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8ba
cf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765016927571754934,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-996504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33347f55a7bed14a3a90ed775ec53fc8,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5e078cd7-c95a-44e4-bd81-b1c223590c49 name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 06 10:29:05 test-preload-996504 crio[834]: time="2025-12-06 10:29:05.311653491Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b3ce374c-676a-4eca-8ad6-f2e5bbdc5d9c name=/runtime.v1.RuntimeService/Version
	Dec 06 10:29:05 test-preload-996504 crio[834]: time="2025-12-06 10:29:05.311768615Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b3ce374c-676a-4eca-8ad6-f2e5bbdc5d9c name=/runtime.v1.RuntimeService/Version
	Dec 06 10:29:05 test-preload-996504 crio[834]: time="2025-12-06 10:29:05.313210628Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d7069e51-4df2-4acc-a886-42646cd9f384 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 10:29:05 test-preload-996504 crio[834]: time="2025-12-06 10:29:05.314013456Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765016945313990217,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d7069e51-4df2-4acc-a886-42646cd9f384 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 10:29:05 test-preload-996504 crio[834]: time="2025-12-06 10:29:05.314701759Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d916affd-aa99-4861-afb5-59808919b1c6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 10:29:05 test-preload-996504 crio[834]: time="2025-12-06 10:29:05.314753550Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d916affd-aa99-4861-afb5-59808919b1c6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 10:29:05 test-preload-996504 crio[834]: time="2025-12-06 10:29:05.314904397Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dc5ffbe0116ea60e632bce5e63f8dfff03b970bcaa5828519dfc9e3a22655d73,PodSandboxId:b13af3f66ea9660039a4e49893e84cf337f62bc05111c75db1250dc7445eefbf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765016935832880514,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-m7xxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b106b432-9071-4c2e-b5b8-4852c2b10584,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4133bf37e877a47ce7c91710b63ee9a218afd545595bf42ed9acdfc17c230738,PodSandboxId:54729486475a58db4731d385e8291683fcbe5e498cd5cc902ceb62b369bff225,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765016932219614647,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t2nw7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 495927b9-b002-4c19-ae7f-70a3bbbf5063,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:854c2acf83ae6c1e873945707b042da7b3c69dc3d8acd90a564cc03363cb3a9f,PodSandboxId:4915cef8f2e58081869c0efb917571f5871c6e378e485a3a3c43a26b593f1997,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765016932225126288,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe3084f4-b72d-4bc9-b6ff-a85833f09ae6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ea36f3116534554aebc5c4fe59e8905cb8ab3c6db505565147359d96a2fc338,PodSandboxId:bbd73a7ef31bdef3277a57c08c016e67b5280802b65b7912ab609ba0ebe483fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765016927592847365,Labels:map[string]string{io.kubernetes.container.name:
kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-996504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97b4f5b62f12e57f6e3d1c66d9854e90,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3786d083621af5da703d47b0f7b691de0a9f180fccc500df89c16736d1d6249c,PodSandboxId:0b464351b3e796488f0614459e1cd64650ae99c4ceae131e0907fb37c8c93bd4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:
CONTAINER_RUNNING,CreatedAt:1765016927576083661,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-996504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07a743ba49b70f1204facae0b3406012,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e0c2f0aa9f2b2374ad97fe5e61202f3da5d23f58f669c533cb1d6bfcb595b48,PodSandboxId:52cfcce4601cea2e2f2a3c6a53b816a482a2ae67bf339316d1ea4b7b3b09a7c8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765016927562865968,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-996504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7f6f7415f79cdc4fabb6ca4b05a7d10,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab1c44c30c3d2afc5597d11b675b05ad8a7f03efa490a6101f402c258d96ed38,PodSandboxId:7abfae4d8e5f6f2b52d1d86ce9e030c747286886e763c09f63f2e5eb734d149d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8ba
cf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765016927571754934,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-996504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33347f55a7bed14a3a90ed775ec53fc8,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d916affd-aa99-4861-afb5-59808919b1c6 name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 06 10:29:05 test-preload-996504 crio[834]: time="2025-12-06 10:29:05.344798109Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fcc38870-08b9-47b7-b940-2288ff824d8b name=/runtime.v1.RuntimeService/Version
	Dec 06 10:29:05 test-preload-996504 crio[834]: time="2025-12-06 10:29:05.344873450Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fcc38870-08b9-47b7-b940-2288ff824d8b name=/runtime.v1.RuntimeService/Version
	Dec 06 10:29:05 test-preload-996504 crio[834]: time="2025-12-06 10:29:05.346452006Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cef980a7-698c-4395-888c-27e42285d3a0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 10:29:05 test-preload-996504 crio[834]: time="2025-12-06 10:29:05.346914907Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765016945346891871,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cef980a7-698c-4395-888c-27e42285d3a0 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 10:29:05 test-preload-996504 crio[834]: time="2025-12-06 10:29:05.348220785Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=22e3372e-aaac-498b-9a57-b4d349ffd630 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 10:29:05 test-preload-996504 crio[834]: time="2025-12-06 10:29:05.348327484Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=22e3372e-aaac-498b-9a57-b4d349ffd630 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 10:29:05 test-preload-996504 crio[834]: time="2025-12-06 10:29:05.348698629Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dc5ffbe0116ea60e632bce5e63f8dfff03b970bcaa5828519dfc9e3a22655d73,PodSandboxId:b13af3f66ea9660039a4e49893e84cf337f62bc05111c75db1250dc7445eefbf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765016935832880514,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-m7xxz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b106b432-9071-4c2e-b5b8-4852c2b10584,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4133bf37e877a47ce7c91710b63ee9a218afd545595bf42ed9acdfc17c230738,PodSandboxId:54729486475a58db4731d385e8291683fcbe5e498cd5cc902ceb62b369bff225,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765016932219614647,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-t2nw7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 495927b9-b002-4c19-ae7f-70a3bbbf5063,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:854c2acf83ae6c1e873945707b042da7b3c69dc3d8acd90a564cc03363cb3a9f,PodSandboxId:4915cef8f2e58081869c0efb917571f5871c6e378e485a3a3c43a26b593f1997,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765016932225126288,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe3084f4-b72d-4bc9-b6ff-a85833f09ae6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ea36f3116534554aebc5c4fe59e8905cb8ab3c6db505565147359d96a2fc338,PodSandboxId:bbd73a7ef31bdef3277a57c08c016e67b5280802b65b7912ab609ba0ebe483fe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765016927592847365,Labels:map[string]string{io.kubernetes.container.name:
kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-996504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97b4f5b62f12e57f6e3d1c66d9854e90,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3786d083621af5da703d47b0f7b691de0a9f180fccc500df89c16736d1d6249c,PodSandboxId:0b464351b3e796488f0614459e1cd64650ae99c4ceae131e0907fb37c8c93bd4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:
CONTAINER_RUNNING,CreatedAt:1765016927576083661,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-996504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07a743ba49b70f1204facae0b3406012,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e0c2f0aa9f2b2374ad97fe5e61202f3da5d23f58f669c533cb1d6bfcb595b48,PodSandboxId:52cfcce4601cea2e2f2a3c6a53b816a482a2ae67bf339316d1ea4b7b3b09a7c8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765016927562865968,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-996504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7f6f7415f79cdc4fabb6ca4b05a7d10,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab1c44c30c3d2afc5597d11b675b05ad8a7f03efa490a6101f402c258d96ed38,PodSandboxId:7abfae4d8e5f6f2b52d1d86ce9e030c747286886e763c09f63f2e5eb734d149d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8ba
cf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765016927571754934,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-996504,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33347f55a7bed14a3a90ed775ec53fc8,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=22e3372e-aaac-498b-9a57-b4d349ffd630 name=/runtime.v1.RuntimeServic
e/ListContainers
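
The CRI-O debug entries above are the server side of three CRI gRPC calls repeated by the kubelet/inspection tooling: /runtime.v1.RuntimeService/Version, /runtime.v1.ImageService/ImageFsInfo, and /runtime.v1.RuntimeService/ListContainers with an empty filter (hence "No filters were applied, returning full container list"). A minimal client issuing the same three calls with k8s.io/cri-api looks roughly like the sketch below; the socket path is the conventional CRI-O default and an assumption here, not something taken from the log.

```go
// Sketch of a CRI client making the RPCs recorded in the CRI-O debug log.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	img := runtimeapi.NewImageServiceClient(conn)

	// /runtime.v1.RuntimeService/Version
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Println("runtime:", ver.RuntimeName, ver.RuntimeVersion)

	// /runtime.v1.ImageService/ImageFsInfo
	fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
	if err != nil {
		panic(err)
	}
	for _, f := range fs.ImageFilesystems {
		fmt.Println("image fs:", f.FsId.Mountpoint, "used bytes:", f.UsedBytes.Value)
	}

	// /runtime.v1.RuntimeService/ListContainers with an empty filter,
	// which is what triggers the "No filters were applied" debug line.
	list, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{
		Filter: &runtimeapi.ContainerFilter{},
	})
	if err != nil {
		panic(err)
	}
	for _, c := range list.Containers {
		fmt.Println(c.Metadata.Name, c.State)
	}
}
```

The "container status" table that follows is essentially a rendering of the same ListContainers response.
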
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                           NAMESPACE
	dc5ffbe0116ea       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   9 seconds ago       Running             coredns                   1                   b13af3f66ea96       coredns-66bc5c9577-m7xxz                      kube-system
	854c2acf83ae6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 seconds ago      Running             storage-provisioner       1                   4915cef8f2e58       storage-provisioner                           kube-system
	4133bf37e877a       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   13 seconds ago      Running             kube-proxy                1                   54729486475a5       kube-proxy-t2nw7                              kube-system
	1ea36f3116534       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   17 seconds ago      Running             kube-scheduler            1                   bbd73a7ef31bd       kube-scheduler-test-preload-996504            kube-system
	3786d083621af       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   17 seconds ago      Running             etcd                      1                   0b464351b3e79       etcd-test-preload-996504                      kube-system
	ab1c44c30c3d2       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   17 seconds ago      Running             kube-controller-manager   1                   7abfae4d8e5f6       kube-controller-manager-test-preload-996504   kube-system
	3e0c2f0aa9f2b       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   17 seconds ago      Running             kube-apiserver            1                   52cfcce4601ce       kube-apiserver-test-preload-996504            kube-system
	
	
	==> coredns [dc5ffbe0116ea60e632bce5e63f8dfff03b970bcaa5828519dfc9e3a22655d73] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56083 - 49359 "HINFO IN 6580122541055914959.7236496005860842915. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.021944966s
	
	
	==> describe nodes <==
	Name:               test-preload-996504
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-996504
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4
	                    minikube.k8s.io/name=test-preload-996504
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T10_27_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 10:27:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-996504
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 10:29:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 10:28:52 +0000   Sat, 06 Dec 2025 10:27:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 10:28:52 +0000   Sat, 06 Dec 2025 10:27:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 10:28:52 +0000   Sat, 06 Dec 2025 10:27:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 10:28:52 +0000   Sat, 06 Dec 2025 10:28:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.41
	  Hostname:    test-preload-996504
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 fa851289ef6c4a8096ed3fe396c166ac
	  System UUID:                fa851289-ef6c-4a80-96ed-3fe396c166ac
	  Boot ID:                    15acc4e0-16f5-4ec7-9703-8b2294754133
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-m7xxz                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     92s
	  kube-system                 etcd-test-preload-996504                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         99s
	  kube-system                 kube-apiserver-test-preload-996504             250m (12%)    0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-controller-manager-test-preload-996504    200m (10%)    0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-proxy-t2nw7                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-scheduler-test-preload-996504             100m (5%)     0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 90s                kube-proxy       
	  Normal   Starting                 12s                kube-proxy       
	  Normal   NodeHasSufficientMemory  97s                kubelet          Node test-preload-996504 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  97s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    97s                kubelet          Node test-preload-996504 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     97s                kubelet          Node test-preload-996504 status is now: NodeHasSufficientPID
	  Normal   Starting                 97s                kubelet          Starting kubelet.
	  Normal   NodeReady                96s                kubelet          Node test-preload-996504 status is now: NodeReady
	  Normal   RegisteredNode           93s                node-controller  Node test-preload-996504 event: Registered Node test-preload-996504 in Controller
	  Normal   Starting                 20s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  20s (x8 over 20s)  kubelet          Node test-preload-996504 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    20s (x8 over 20s)  kubelet          Node test-preload-996504 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     20s (x7 over 20s)  kubelet          Node test-preload-996504 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  20s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 15s                kubelet          Node test-preload-996504 has been rebooted, boot id: 15acc4e0-16f5-4ec7-9703-8b2294754133
	  Normal   RegisteredNode           11s                node-controller  Node test-preload-996504 event: Registered Node test-preload-996504 in Controller
	
	
	==> dmesg <==
	[Dec 6 10:28] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000522] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.004078] (rpcbind)[120]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.939857] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000019] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.105784] kauditd_printk_skb: 88 callbacks suppressed
	[  +6.635558] kauditd_printk_skb: 196 callbacks suppressed
	[Dec 6 10:29] kauditd_printk_skb: 197 callbacks suppressed
	
	
	==> etcd [3786d083621af5da703d47b0f7b691de0a9f180fccc500df89c16736d1d6249c] <==
	{"level":"warn","ts":"2025-12-06T10:28:49.609843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:28:49.618688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:28:49.667441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:28:49.684064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:28:49.690888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:28:49.703792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:28:49.729730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:28:49.741813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:28:49.758608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:28:49.780541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:28:49.813361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:28:49.837331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:28:49.862646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:28:49.876744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:28:49.923625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:28:49.940853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:28:49.958574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:28:49.961734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:28:49.971353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:28:49.983364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:28:49.992826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:28:50.003358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:28:50.013186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:28:50.023793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:28:50.073021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45454","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:29:05 up 0 min,  0 users,  load average: 1.09, 0.27, 0.09
	Linux test-preload-996504 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Dec  4 13:30:13 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [3e0c2f0aa9f2b2374ad97fe5e61202f3da5d23f58f669c533cb1d6bfcb595b48] <==
	I1206 10:28:50.867396       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1206 10:28:50.868029       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1206 10:28:50.868040       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1206 10:28:50.871113       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1206 10:28:50.871303       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1206 10:28:50.877395       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1206 10:28:50.882278       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1206 10:28:50.882320       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1206 10:28:50.882434       1 aggregator.go:171] initial CRD sync complete...
	I1206 10:28:50.882592       1 autoregister_controller.go:144] Starting autoregister controller
	I1206 10:28:50.882614       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1206 10:28:50.882691       1 cache.go:39] Caches are synced for autoregister controller
	I1206 10:28:50.882524       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	E1206 10:28:50.886149       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1206 10:28:50.893470       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 10:28:50.901852       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1206 10:28:51.674735       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1206 10:28:51.773772       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1206 10:28:52.537702       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1206 10:28:52.594610       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1206 10:28:52.636085       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 10:28:52.652419       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 10:28:54.334506       1 controller.go:667] quota admission added evaluator for: endpoints
	I1206 10:28:54.372209       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1206 10:28:54.517552       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [ab1c44c30c3d2afc5597d11b675b05ad8a7f03efa490a6101f402c258d96ed38] <==
	I1206 10:28:54.052231       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1206 10:28:54.052398       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1206 10:28:54.052508       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1206 10:28:54.056717       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1206 10:28:54.061180       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1206 10:28:54.062193       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1206 10:28:54.062458       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1206 10:28:54.064825       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1206 10:28:54.065037       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1206 10:28:54.065130       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1206 10:28:54.065051       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1206 10:28:54.065063       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1206 10:28:54.065717       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1206 10:28:54.068344       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1206 10:28:54.065063       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1206 10:28:54.070945       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1206 10:28:54.074349       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1206 10:28:54.075064       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1206 10:28:54.076695       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1206 10:28:54.082936       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1206 10:28:54.087178       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1206 10:28:54.139980       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 10:28:54.164440       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 10:28:54.164469       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1206 10:28:54.164476       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [4133bf37e877a47ce7c91710b63ee9a218afd545595bf42ed9acdfc17c230738] <==
	I1206 10:28:52.448741       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1206 10:28:52.550602       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 10:28:52.550675       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.41"]
	E1206 10:28:52.550765       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 10:28:52.624550       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1206 10:28:52.624612       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1206 10:28:52.624642       1 server_linux.go:132] "Using iptables Proxier"
	I1206 10:28:52.645520       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 10:28:52.645981       1 server.go:527] "Version info" version="v1.34.2"
	I1206 10:28:52.646021       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 10:28:52.652448       1 config.go:200] "Starting service config controller"
	I1206 10:28:52.652552       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 10:28:52.652644       1 config.go:106] "Starting endpoint slice config controller"
	I1206 10:28:52.652702       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 10:28:52.652727       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 10:28:52.652741       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 10:28:52.654826       1 config.go:309] "Starting node config controller"
	I1206 10:28:52.655469       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 10:28:52.655511       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 10:28:52.753969       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 10:28:52.753988       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 10:28:52.754021       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1ea36f3116534554aebc5c4fe59e8905cb8ab3c6db505565147359d96a2fc338] <==
	I1206 10:28:49.189142       1 serving.go:386] Generated self-signed cert in-memory
	W1206 10:28:50.713025       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1206 10:28:50.713049       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1206 10:28:50.713187       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1206 10:28:50.713198       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1206 10:28:50.801667       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1206 10:28:50.808913       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 10:28:50.815759       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 10:28:50.815840       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 10:28:50.817529       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1206 10:28:50.817608       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1206 10:28:50.916990       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 06 10:28:50 test-preload-996504 kubelet[1188]: I1206 10:28:50.928887    1188 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 06 10:28:50 test-preload-996504 kubelet[1188]: E1206 10:28:50.930072    1188 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-test-preload-996504\" already exists" pod="kube-system/etcd-test-preload-996504"
	Dec 06 10:28:50 test-preload-996504 kubelet[1188]: I1206 10:28:50.930171    1188 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 06 10:28:50 test-preload-996504 kubelet[1188]: I1206 10:28:50.931195    1188 setters.go:543] "Node became not ready" node="test-preload-996504" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T10:28:50Z","lastTransitionTime":"2025-12-06T10:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"}
	Dec 06 10:28:51 test-preload-996504 kubelet[1188]: I1206 10:28:51.309767    1188 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-test-preload-996504"
	Dec 06 10:28:51 test-preload-996504 kubelet[1188]: E1206 10:28:51.320673    1188 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-test-preload-996504\" already exists" pod="kube-system/kube-controller-manager-test-preload-996504"
	Dec 06 10:28:51 test-preload-996504 kubelet[1188]: I1206 10:28:51.709593    1188 apiserver.go:52] "Watching apiserver"
	Dec 06 10:28:51 test-preload-996504 kubelet[1188]: E1206 10:28:51.715772    1188 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-66bc5c9577-m7xxz" podUID="b106b432-9071-4c2e-b5b8-4852c2b10584"
	Dec 06 10:28:51 test-preload-996504 kubelet[1188]: I1206 10:28:51.736754    1188 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 06 10:28:51 test-preload-996504 kubelet[1188]: I1206 10:28:51.767291    1188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/495927b9-b002-4c19-ae7f-70a3bbbf5063-xtables-lock\") pod \"kube-proxy-t2nw7\" (UID: \"495927b9-b002-4c19-ae7f-70a3bbbf5063\") " pod="kube-system/kube-proxy-t2nw7"
	Dec 06 10:28:51 test-preload-996504 kubelet[1188]: I1206 10:28:51.767352    1188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/495927b9-b002-4c19-ae7f-70a3bbbf5063-lib-modules\") pod \"kube-proxy-t2nw7\" (UID: \"495927b9-b002-4c19-ae7f-70a3bbbf5063\") " pod="kube-system/kube-proxy-t2nw7"
	Dec 06 10:28:51 test-preload-996504 kubelet[1188]: I1206 10:28:51.767370    1188 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/fe3084f4-b72d-4bc9-b6ff-a85833f09ae6-tmp\") pod \"storage-provisioner\" (UID: \"fe3084f4-b72d-4bc9-b6ff-a85833f09ae6\") " pod="kube-system/storage-provisioner"
	Dec 06 10:28:51 test-preload-996504 kubelet[1188]: E1206 10:28:51.767992    1188 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 06 10:28:51 test-preload-996504 kubelet[1188]: E1206 10:28:51.768110    1188 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b106b432-9071-4c2e-b5b8-4852c2b10584-config-volume podName:b106b432-9071-4c2e-b5b8-4852c2b10584 nodeName:}" failed. No retries permitted until 2025-12-06 10:28:52.26809024 +0000 UTC m=+6.648340336 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b106b432-9071-4c2e-b5b8-4852c2b10584-config-volume") pod "coredns-66bc5c9577-m7xxz" (UID: "b106b432-9071-4c2e-b5b8-4852c2b10584") : object "kube-system"/"coredns" not registered
	Dec 06 10:28:52 test-preload-996504 kubelet[1188]: E1206 10:28:52.272157    1188 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 06 10:28:52 test-preload-996504 kubelet[1188]: E1206 10:28:52.272226    1188 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b106b432-9071-4c2e-b5b8-4852c2b10584-config-volume podName:b106b432-9071-4c2e-b5b8-4852c2b10584 nodeName:}" failed. No retries permitted until 2025-12-06 10:28:53.272213052 +0000 UTC m=+7.652463148 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b106b432-9071-4c2e-b5b8-4852c2b10584-config-volume") pod "coredns-66bc5c9577-m7xxz" (UID: "b106b432-9071-4c2e-b5b8-4852c2b10584") : object "kube-system"/"coredns" not registered
	Dec 06 10:28:52 test-preload-996504 kubelet[1188]: I1206 10:28:52.946691    1188 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 06 10:28:53 test-preload-996504 kubelet[1188]: E1206 10:28:53.281817    1188 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 06 10:28:53 test-preload-996504 kubelet[1188]: E1206 10:28:53.281903    1188 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b106b432-9071-4c2e-b5b8-4852c2b10584-config-volume podName:b106b432-9071-4c2e-b5b8-4852c2b10584 nodeName:}" failed. No retries permitted until 2025-12-06 10:28:55.281889363 +0000 UTC m=+9.662139459 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b106b432-9071-4c2e-b5b8-4852c2b10584-config-volume") pod "coredns-66bc5c9577-m7xxz" (UID: "b106b432-9071-4c2e-b5b8-4852c2b10584") : object "kube-system"/"coredns" not registered
	Dec 06 10:28:55 test-preload-996504 kubelet[1188]: E1206 10:28:55.783871    1188 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765016935782897375 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	Dec 06 10:28:55 test-preload-996504 kubelet[1188]: E1206 10:28:55.783921    1188 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765016935782897375 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	Dec 06 10:28:56 test-preload-996504 kubelet[1188]: I1206 10:28:56.958204    1188 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 06 10:29:01 test-preload-996504 kubelet[1188]: I1206 10:29:01.946619    1188 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 06 10:29:05 test-preload-996504 kubelet[1188]: E1206 10:29:05.786309    1188 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765016945784875364 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	Dec 06 10:29:05 test-preload-996504 kubelet[1188]: E1206 10:29:05.787148    1188 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765016945784875364 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	
	
	==> storage-provisioner [854c2acf83ae6c1e873945707b042da7b3c69dc3d8acd90a564cc03363cb3a9f] <==
	I1206 10:28:52.323231       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-996504 -n test-preload-996504
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-996504 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-996504" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-996504
--- FAIL: TestPreload (146.62s)
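These integration failures follow a common pattern: run a minikube command and assert that its combined output contains an expected message (the next failure, pause_test.go:100, expects "The running cluster does not require reconfiguration"). A minimal, hypothetical Go sketch of that pattern for local triage — not the actual minikube test helpers — using the binary path, profile name, and expected string from the TestPause run below:

	// Hypothetical sketch: run a command and check whether its combined
	// stdout/stderr contains an expected substring.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// outputContains runs name with args and reports whether the combined
	// output contains expected, returning the output for inspection.
	func outputContains(expected, name string, args ...string) (bool, string, error) {
		out, err := exec.Command(name, args...).CombinedOutput()
		return strings.Contains(string(out), expected), string(out), err
	}

	func main() {
		// Command line and expected message taken from the TestPause failure below.
		ok, out, err := outputContains(
			"The running cluster does not require reconfiguration",
			"out/minikube-linux-amd64",
			"start", "-p", "pause-672164", "--alsologtostderr", "-v=1",
			"--driver=kvm2", "--container-runtime=crio",
		)
		if !ok {
			fmt.Printf("expected substring not found (err=%v); output:\n%s\n", err, out)
		}
	}
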

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (58.01s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-672164 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-672164 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (53.930825967s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-672164] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-392561/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-392561/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-672164" primary control-plane node in "pause-672164" cluster
	* Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-672164" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 10:35:50.106377  434530 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:35:50.106801  434530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:35:50.106814  434530 out.go:374] Setting ErrFile to fd 2...
	I1206 10:35:50.106819  434530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:35:50.107187  434530 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-392561/.minikube/bin
	I1206 10:35:50.107974  434530 out.go:368] Setting JSON to false
	I1206 10:35:50.109510  434530 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8290,"bootTime":1765009060,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 10:35:50.109598  434530 start.go:143] virtualization: kvm guest
	I1206 10:35:50.111735  434530 out.go:179] * [pause-672164] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 10:35:50.113826  434530 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 10:35:50.113893  434530 notify.go:221] Checking for updates...
	I1206 10:35:50.117225  434530 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:35:50.118786  434530 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-392561/kubeconfig
	I1206 10:35:50.120409  434530 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-392561/.minikube
	I1206 10:35:50.121872  434530 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 10:35:50.123238  434530 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:35:50.125391  434530 config.go:182] Loaded profile config "pause-672164": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:35:50.126087  434530 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:35:50.179487  434530 out.go:179] * Using the kvm2 driver based on existing profile
	I1206 10:35:50.182152  434530 start.go:309] selected driver: kvm2
	I1206 10:35:50.182182  434530 start.go:927] validating driver "kvm2" against &{Name:pause-672164 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.34.2 ClusterName:pause-672164 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-instal
ler:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:35:50.182338  434530 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:35:50.183752  434530 cni.go:84] Creating CNI manager for ""
	I1206 10:35:50.183867  434530 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 10:35:50.183953  434530 start.go:353] cluster config:
	{Name:pause-672164 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-672164 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false p
ortainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:35:50.184125  434530 iso.go:125] acquiring lock: {Name:mkf36bf2c9901302dc74c7ac02d02007e6a978f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:35:50.185865  434530 out.go:179] * Starting "pause-672164" primary control-plane node in "pause-672164" cluster
	I1206 10:35:50.186875  434530 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 10:35:50.186925  434530 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-392561/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1206 10:35:50.186941  434530 cache.go:65] Caching tarball of preloaded images
	I1206 10:35:50.187057  434530 preload.go:238] Found /home/jenkins/minikube-integration/22047-392561/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 10:35:50.187074  434530 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1206 10:35:50.187267  434530 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/pause-672164/config.json ...
	I1206 10:35:50.188111  434530 start.go:360] acquireMachinesLock for pause-672164: {Name:mk0e8456872a81874c47f1b4b5997728e70c766d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1206 10:35:50.188188  434530 start.go:364] duration metric: took 46.677µs to acquireMachinesLock for "pause-672164"
	I1206 10:35:50.188212  434530 start.go:96] Skipping create...Using existing machine configuration
	I1206 10:35:50.188223  434530 fix.go:54] fixHost starting: 
	I1206 10:35:50.190581  434530 fix.go:112] recreateIfNeeded on pause-672164: state=Running err=<nil>
	W1206 10:35:50.190610  434530 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 10:35:50.191952  434530 out.go:252] * Updating the running kvm2 "pause-672164" VM ...
	I1206 10:35:50.191992  434530 machine.go:94] provisionDockerMachine start ...
	I1206 10:35:50.195318  434530 main.go:143] libmachine: domain pause-672164 has defined MAC address 52:54:00:b0:41:2d in network mk-pause-672164
	I1206 10:35:50.195904  434530 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b0:41:2d", ip: ""} in network mk-pause-672164: {Iface:virbr1 ExpiryTime:2025-12-06 11:34:48 +0000 UTC Type:0 Mac:52:54:00:b0:41:2d Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-672164 Clientid:01:52:54:00:b0:41:2d}
	I1206 10:35:50.195935  434530 main.go:143] libmachine: domain pause-672164 has defined IP address 192.168.39.24 and MAC address 52:54:00:b0:41:2d in network mk-pause-672164
	I1206 10:35:50.196141  434530 main.go:143] libmachine: Using SSH client type: native
	I1206 10:35:50.196413  434530 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I1206 10:35:50.196426  434530 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 10:35:50.329277  434530 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-672164
	
	I1206 10:35:50.329311  434530 buildroot.go:166] provisioning hostname "pause-672164"
	I1206 10:35:50.332903  434530 main.go:143] libmachine: domain pause-672164 has defined MAC address 52:54:00:b0:41:2d in network mk-pause-672164
	I1206 10:35:50.333510  434530 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b0:41:2d", ip: ""} in network mk-pause-672164: {Iface:virbr1 ExpiryTime:2025-12-06 11:34:48 +0000 UTC Type:0 Mac:52:54:00:b0:41:2d Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-672164 Clientid:01:52:54:00:b0:41:2d}
	I1206 10:35:50.333551  434530 main.go:143] libmachine: domain pause-672164 has defined IP address 192.168.39.24 and MAC address 52:54:00:b0:41:2d in network mk-pause-672164
	I1206 10:35:50.333890  434530 main.go:143] libmachine: Using SSH client type: native
	I1206 10:35:50.334223  434530 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I1206 10:35:50.334249  434530 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-672164 && echo "pause-672164" | sudo tee /etc/hostname
	I1206 10:35:50.485998  434530 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-672164
	
	I1206 10:35:50.489844  434530 main.go:143] libmachine: domain pause-672164 has defined MAC address 52:54:00:b0:41:2d in network mk-pause-672164
	I1206 10:35:50.490439  434530 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b0:41:2d", ip: ""} in network mk-pause-672164: {Iface:virbr1 ExpiryTime:2025-12-06 11:34:48 +0000 UTC Type:0 Mac:52:54:00:b0:41:2d Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-672164 Clientid:01:52:54:00:b0:41:2d}
	I1206 10:35:50.490492  434530 main.go:143] libmachine: domain pause-672164 has defined IP address 192.168.39.24 and MAC address 52:54:00:b0:41:2d in network mk-pause-672164
	I1206 10:35:50.490814  434530 main.go:143] libmachine: Using SSH client type: native
	I1206 10:35:50.491074  434530 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I1206 10:35:50.491093  434530 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-672164' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-672164/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-672164' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 10:35:50.622469  434530 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 10:35:50.622504  434530 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22047-392561/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-392561/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-392561/.minikube}
	I1206 10:35:50.622532  434530 buildroot.go:174] setting up certificates
	I1206 10:35:50.622545  434530 provision.go:84] configureAuth start
	I1206 10:35:50.626211  434530 main.go:143] libmachine: domain pause-672164 has defined MAC address 52:54:00:b0:41:2d in network mk-pause-672164
	I1206 10:35:50.626690  434530 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b0:41:2d", ip: ""} in network mk-pause-672164: {Iface:virbr1 ExpiryTime:2025-12-06 11:34:48 +0000 UTC Type:0 Mac:52:54:00:b0:41:2d Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-672164 Clientid:01:52:54:00:b0:41:2d}
	I1206 10:35:50.626757  434530 main.go:143] libmachine: domain pause-672164 has defined IP address 192.168.39.24 and MAC address 52:54:00:b0:41:2d in network mk-pause-672164
	I1206 10:35:50.629976  434530 main.go:143] libmachine: domain pause-672164 has defined MAC address 52:54:00:b0:41:2d in network mk-pause-672164
	I1206 10:35:50.630656  434530 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b0:41:2d", ip: ""} in network mk-pause-672164: {Iface:virbr1 ExpiryTime:2025-12-06 11:34:48 +0000 UTC Type:0 Mac:52:54:00:b0:41:2d Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-672164 Clientid:01:52:54:00:b0:41:2d}
	I1206 10:35:50.630704  434530 main.go:143] libmachine: domain pause-672164 has defined IP address 192.168.39.24 and MAC address 52:54:00:b0:41:2d in network mk-pause-672164
	I1206 10:35:50.631042  434530 provision.go:143] copyHostCerts
	I1206 10:35:50.631108  434530 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-392561/.minikube/ca.pem, removing ...
	I1206 10:35:50.631116  434530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-392561/.minikube/ca.pem
	I1206 10:35:50.631178  434530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-392561/.minikube/ca.pem (1082 bytes)
	I1206 10:35:50.631293  434530 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-392561/.minikube/cert.pem, removing ...
	I1206 10:35:50.631302  434530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-392561/.minikube/cert.pem
	I1206 10:35:50.631329  434530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-392561/.minikube/cert.pem (1123 bytes)
	I1206 10:35:50.631398  434530 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-392561/.minikube/key.pem, removing ...
	I1206 10:35:50.631406  434530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-392561/.minikube/key.pem
	I1206 10:35:50.631431  434530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-392561/.minikube/key.pem (1679 bytes)
	I1206 10:35:50.631493  434530 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-392561/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca-key.pem org=jenkins.pause-672164 san=[127.0.0.1 192.168.39.24 localhost minikube pause-672164]
	I1206 10:35:50.733262  434530 provision.go:177] copyRemoteCerts
	I1206 10:35:50.733332  434530 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 10:35:50.736184  434530 main.go:143] libmachine: domain pause-672164 has defined MAC address 52:54:00:b0:41:2d in network mk-pause-672164
	I1206 10:35:50.736692  434530 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b0:41:2d", ip: ""} in network mk-pause-672164: {Iface:virbr1 ExpiryTime:2025-12-06 11:34:48 +0000 UTC Type:0 Mac:52:54:00:b0:41:2d Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-672164 Clientid:01:52:54:00:b0:41:2d}
	I1206 10:35:50.736744  434530 main.go:143] libmachine: domain pause-672164 has defined IP address 192.168.39.24 and MAC address 52:54:00:b0:41:2d in network mk-pause-672164
	I1206 10:35:50.736937  434530 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/pause-672164/id_rsa Username:docker}
	I1206 10:35:50.838261  434530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 10:35:50.877945  434530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1206 10:35:50.925768  434530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 10:35:50.965627  434530 provision.go:87] duration metric: took 343.064983ms to configureAuth
	I1206 10:35:50.965663  434530 buildroot.go:189] setting minikube options for container-runtime
	I1206 10:35:50.965992  434530 config.go:182] Loaded profile config "pause-672164": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:35:50.969998  434530 main.go:143] libmachine: domain pause-672164 has defined MAC address 52:54:00:b0:41:2d in network mk-pause-672164
	I1206 10:35:50.970533  434530 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b0:41:2d", ip: ""} in network mk-pause-672164: {Iface:virbr1 ExpiryTime:2025-12-06 11:34:48 +0000 UTC Type:0 Mac:52:54:00:b0:41:2d Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-672164 Clientid:01:52:54:00:b0:41:2d}
	I1206 10:35:50.970588  434530 main.go:143] libmachine: domain pause-672164 has defined IP address 192.168.39.24 and MAC address 52:54:00:b0:41:2d in network mk-pause-672164
	I1206 10:35:50.970898  434530 main.go:143] libmachine: Using SSH client type: native
	I1206 10:35:50.971220  434530 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I1206 10:35:50.971253  434530 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 10:35:56.547258  434530 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 10:35:56.547294  434530 machine.go:97] duration metric: took 6.355292589s to provisionDockerMachine
	I1206 10:35:56.547310  434530 start.go:293] postStartSetup for "pause-672164" (driver="kvm2")
	I1206 10:35:56.547323  434530 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 10:35:56.547387  434530 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 10:35:56.550813  434530 main.go:143] libmachine: domain pause-672164 has defined MAC address 52:54:00:b0:41:2d in network mk-pause-672164
	I1206 10:35:56.551294  434530 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b0:41:2d", ip: ""} in network mk-pause-672164: {Iface:virbr1 ExpiryTime:2025-12-06 11:34:48 +0000 UTC Type:0 Mac:52:54:00:b0:41:2d Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-672164 Clientid:01:52:54:00:b0:41:2d}
	I1206 10:35:56.551322  434530 main.go:143] libmachine: domain pause-672164 has defined IP address 192.168.39.24 and MAC address 52:54:00:b0:41:2d in network mk-pause-672164
	I1206 10:35:56.551510  434530 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/pause-672164/id_rsa Username:docker}
	I1206 10:35:56.642074  434530 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 10:35:56.647784  434530 info.go:137] Remote host: Buildroot 2025.02
	I1206 10:35:56.647818  434530 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-392561/.minikube/addons for local assets ...
	I1206 10:35:56.647924  434530 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-392561/.minikube/files for local assets ...
	I1206 10:35:56.648034  434530 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-392561/.minikube/files/etc/ssl/certs/3965342.pem -> 3965342.pem in /etc/ssl/certs
	I1206 10:35:56.648165  434530 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 10:35:56.660982  434530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/files/etc/ssl/certs/3965342.pem --> /etc/ssl/certs/3965342.pem (1708 bytes)
	I1206 10:35:56.694041  434530 start.go:296] duration metric: took 146.713846ms for postStartSetup
	I1206 10:35:56.694087  434530 fix.go:56] duration metric: took 6.50586545s for fixHost
	I1206 10:35:56.697350  434530 main.go:143] libmachine: domain pause-672164 has defined MAC address 52:54:00:b0:41:2d in network mk-pause-672164
	I1206 10:35:56.697849  434530 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b0:41:2d", ip: ""} in network mk-pause-672164: {Iface:virbr1 ExpiryTime:2025-12-06 11:34:48 +0000 UTC Type:0 Mac:52:54:00:b0:41:2d Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-672164 Clientid:01:52:54:00:b0:41:2d}
	I1206 10:35:56.697875  434530 main.go:143] libmachine: domain pause-672164 has defined IP address 192.168.39.24 and MAC address 52:54:00:b0:41:2d in network mk-pause-672164
	I1206 10:35:56.698061  434530 main.go:143] libmachine: Using SSH client type: native
	I1206 10:35:56.698285  434530 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.24 22 <nil> <nil>}
	I1206 10:35:56.698296  434530 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1206 10:35:56.819982  434530 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765017356.813821213
	
	I1206 10:35:56.820008  434530 fix.go:216] guest clock: 1765017356.813821213
	I1206 10:35:56.820018  434530 fix.go:229] Guest: 2025-12-06 10:35:56.813821213 +0000 UTC Remote: 2025-12-06 10:35:56.694090958 +0000 UTC m=+6.654861978 (delta=119.730255ms)
	I1206 10:35:56.820040  434530 fix.go:200] guest clock delta is within tolerance: 119.730255ms
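The fix.go lines above reduce to a small comparison: take the guest's date +%s.%N reading, subtract the host-side reference timestamp, and keep the existing host if the absolute delta stays inside a tolerance. A minimal Go sketch of that arithmetic using this run's timestamps; the one-second tolerance constant is an illustrative assumption, not a value read from the log:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Values from the log above: the guest reports seconds.nanoseconds via
		// `date +%s.%N`, the host records its own reference timestamp.
		guest := time.Unix(1765017356, 813821213)
		host := time.Date(2025, time.December, 6, 10, 35, 56, 694090958, time.UTC)

		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}

		// Hypothetical tolerance for illustration; the real threshold lives in minikube's fix.go.
		const tolerance = 1 * time.Second
		fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta <= tolerance)
	}

Run against the values above, this prints the same 119.730255ms delta reported in the log.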
	I1206 10:35:56.820047  434530 start.go:83] releasing machines lock for "pause-672164", held for 6.631845841s
	I1206 10:35:56.823590  434530 main.go:143] libmachine: domain pause-672164 has defined MAC address 52:54:00:b0:41:2d in network mk-pause-672164
	I1206 10:35:56.824380  434530 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b0:41:2d", ip: ""} in network mk-pause-672164: {Iface:virbr1 ExpiryTime:2025-12-06 11:34:48 +0000 UTC Type:0 Mac:52:54:00:b0:41:2d Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-672164 Clientid:01:52:54:00:b0:41:2d}
	I1206 10:35:56.824427  434530 main.go:143] libmachine: domain pause-672164 has defined IP address 192.168.39.24 and MAC address 52:54:00:b0:41:2d in network mk-pause-672164
	I1206 10:35:56.825193  434530 ssh_runner.go:195] Run: cat /version.json
	I1206 10:35:56.825254  434530 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 10:35:56.828572  434530 main.go:143] libmachine: domain pause-672164 has defined MAC address 52:54:00:b0:41:2d in network mk-pause-672164
	I1206 10:35:56.828954  434530 main.go:143] libmachine: domain pause-672164 has defined MAC address 52:54:00:b0:41:2d in network mk-pause-672164
	I1206 10:35:56.828969  434530 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b0:41:2d", ip: ""} in network mk-pause-672164: {Iface:virbr1 ExpiryTime:2025-12-06 11:34:48 +0000 UTC Type:0 Mac:52:54:00:b0:41:2d Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-672164 Clientid:01:52:54:00:b0:41:2d}
	I1206 10:35:56.829000  434530 main.go:143] libmachine: domain pause-672164 has defined IP address 192.168.39.24 and MAC address 52:54:00:b0:41:2d in network mk-pause-672164
	I1206 10:35:56.829153  434530 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/pause-672164/id_rsa Username:docker}
	I1206 10:35:56.829447  434530 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b0:41:2d", ip: ""} in network mk-pause-672164: {Iface:virbr1 ExpiryTime:2025-12-06 11:34:48 +0000 UTC Type:0 Mac:52:54:00:b0:41:2d Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-672164 Clientid:01:52:54:00:b0:41:2d}
	I1206 10:35:56.829478  434530 main.go:143] libmachine: domain pause-672164 has defined IP address 192.168.39.24 and MAC address 52:54:00:b0:41:2d in network mk-pause-672164
	I1206 10:35:56.829649  434530 sshutil.go:53] new ssh client: &{IP:192.168.39.24 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/pause-672164/id_rsa Username:docker}
	I1206 10:35:56.952163  434530 ssh_runner.go:195] Run: systemctl --version
	I1206 10:35:56.958811  434530 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 10:35:57.110648  434530 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 10:35:57.122910  434530 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 10:35:57.123027  434530 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 10:35:57.135920  434530 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 10:35:57.135949  434530 start.go:496] detecting cgroup driver to use...
	I1206 10:35:57.136031  434530 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 10:35:57.158014  434530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 10:35:57.177114  434530 docker.go:218] disabling cri-docker service (if available) ...
	I1206 10:35:57.177197  434530 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 10:35:57.197726  434530 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 10:35:57.216912  434530 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 10:35:57.408410  434530 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 10:35:57.596877  434530 docker.go:234] disabling docker service ...
	I1206 10:35:57.596953  434530 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 10:35:57.630561  434530 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 10:35:57.650470  434530 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 10:35:57.860494  434530 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 10:35:58.065835  434530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 10:35:58.086221  434530 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 10:35:58.120206  434530 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 10:35:58.120276  434530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:35:58.136922  434530 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 10:35:58.137158  434530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:35:58.151958  434530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:35:58.166186  434530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:35:58.180644  434530 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 10:35:58.196523  434530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:35:58.210865  434530 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:35:58.227885  434530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
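The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch cgroup_manager to cgroupfs, replace conmon_cgroup with "pod", and seed default_sysctls with net.ipv4.ip_unprivileged_port_start=0. A rough Go equivalent of the first two substitutions, applied to an in-memory sample drop-in; the sample contents are illustrative, and minikube performs these edits remotely over SSH with sed exactly as logged:

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		// Illustrative drop-in contents before the edits.
		conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n\n[crio.runtime]\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"

		// Same pattern as the sed `s|^.*pause_image = .*$|...|` above.
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
		// Same pattern as the sed `s|^.*cgroup_manager = .*$|...|` above.
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		// The remaining edits (dropping conmon_cgroup, re-adding it as "pod",
		// seeding default_sysctls) follow the same substitution pattern.

		fmt.Print(conf)
	}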
	I1206 10:35:58.246635  434530 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 10:35:58.259318  434530 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 10:35:58.272382  434530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:35:58.506606  434530 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 10:35:58.768967  434530 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 10:35:58.769065  434530 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 10:35:58.776320  434530 start.go:564] Will wait 60s for crictl version
	I1206 10:35:58.776397  434530 ssh_runner.go:195] Run: which crictl
	I1206 10:35:58.781561  434530 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1206 10:35:58.816233  434530 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1206 10:35:58.816347  434530 ssh_runner.go:195] Run: crio --version
	I1206 10:35:58.858092  434530 ssh_runner.go:195] Run: crio --version
	I1206 10:35:58.897536  434530 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1206 10:35:58.902826  434530 main.go:143] libmachine: domain pause-672164 has defined MAC address 52:54:00:b0:41:2d in network mk-pause-672164
	I1206 10:35:58.903341  434530 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b0:41:2d", ip: ""} in network mk-pause-672164: {Iface:virbr1 ExpiryTime:2025-12-06 11:34:48 +0000 UTC Type:0 Mac:52:54:00:b0:41:2d Iaid: IPaddr:192.168.39.24 Prefix:24 Hostname:pause-672164 Clientid:01:52:54:00:b0:41:2d}
	I1206 10:35:58.903375  434530 main.go:143] libmachine: domain pause-672164 has defined IP address 192.168.39.24 and MAC address 52:54:00:b0:41:2d in network mk-pause-672164
	I1206 10:35:58.903591  434530 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1206 10:35:58.910284  434530 kubeadm.go:884] updating cluster {Name:pause-672164 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-672164 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 10:35:58.910531  434530 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 10:35:58.910601  434530 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:35:58.966369  434530 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 10:35:58.966401  434530 crio.go:433] Images already preloaded, skipping extraction
	I1206 10:35:58.966497  434530 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:35:59.002782  434530 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 10:35:59.002808  434530 cache_images.go:86] Images are preloaded, skipping loading
	I1206 10:35:59.002817  434530 kubeadm.go:935] updating node { 192.168.39.24 8443 v1.34.2 crio true true} ...
	I1206 10:35:59.002973  434530 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-672164 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.24
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-672164 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
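The kubelet drop-in printed above is a rendered unit file whose variable parts are the container runtime, Kubernetes version, node name, and node IP. A sketch of producing an equivalent drop-in with text/template; the template text is reconstructed from the log output rather than copied from minikube's sources, and the field names are illustrative:

	package main

	import (
		"os"
		"text/template"
	)

	// Template reconstructed from the logged unit; not minikube's actual template.
	const unitTmpl = `[Unit]
	Wants={{.ContainerRuntime}}.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		data := struct {
			ContainerRuntime, KubernetesVersion, NodeName, NodeIP string
		}{"crio", "v1.34.2", "pause-672164", "192.168.39.24"}

		t := template.Must(template.New("kubelet").Parse(unitTmpl))
		_ = t.Execute(os.Stdout, data)
	}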
	I1206 10:35:59.003079  434530 ssh_runner.go:195] Run: crio config
	I1206 10:35:59.065576  434530 cni.go:84] Creating CNI manager for ""
	I1206 10:35:59.065604  434530 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 10:35:59.065624  434530 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 10:35:59.065669  434530 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.24 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-672164 NodeName:pause-672164 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.24"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.24 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 10:35:59.065852  434530 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.24
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-672164"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.24"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.24"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 10:35:59.065943  434530 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 10:35:59.079000  434530 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 10:35:59.079073  434530 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 10:35:59.092136  434530 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1206 10:35:59.119506  434530 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 10:35:59.151215  434530 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
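The kubeadm.yaml.new written above bundles several YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---. One property worth checking when reading it is that the kubelet's cgroupDriver matches the cgroup_manager configured for CRI-O earlier in this log (cgroupfs here). A small sketch that splits the documents and reads that field, using gopkg.in/yaml.v3 as an assumed parser and an abbreviated copy of the config:

	package main

	import (
		"fmt"
		"strings"

		"gopkg.in/yaml.v3"
	)

	// Abbreviated excerpt of the generated kubeadm config; the full file is above.
	const kubeadmYAML = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	kubernetesVersion: v1.34.2
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	`

	func main() {
		for _, doc := range strings.Split(kubeadmYAML, "\n---\n") {
			var m map[string]interface{}
			if err := yaml.Unmarshal([]byte(doc), &m); err != nil {
				panic(err)
			}
			if m["kind"] == "KubeletConfiguration" {
				// Should agree with cgroup_manager = "cgroupfs" set in 02-crio.conf.
				fmt.Println("kubelet cgroupDriver:", m["cgroupDriver"])
			}
		}
	}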
	I1206 10:35:59.179387  434530 ssh_runner.go:195] Run: grep 192.168.39.24	control-plane.minikube.internal$ /etc/hosts
	I1206 10:35:59.184873  434530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:35:59.379900  434530 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:35:59.402082  434530 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/pause-672164 for IP: 192.168.39.24
	I1206 10:35:59.402110  434530 certs.go:195] generating shared ca certs ...
	I1206 10:35:59.402153  434530 certs.go:227] acquiring lock for ca certs: {Name:mk3de97d1b446a24abef5e763ff5edd1f090afa6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:35:59.402340  434530 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-392561/.minikube/ca.key
	I1206 10:35:59.402433  434530 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-392561/.minikube/proxy-client-ca.key
	I1206 10:35:59.402471  434530 certs.go:257] generating profile certs ...
	I1206 10:35:59.402592  434530 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/pause-672164/client.key
	I1206 10:35:59.402672  434530 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/pause-672164/apiserver.key.9ff902c7
	I1206 10:35:59.402752  434530 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/pause-672164/proxy-client.key
	I1206 10:35:59.402930  434530 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/396534.pem (1338 bytes)
	W1206 10:35:59.402977  434530 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-392561/.minikube/certs/396534_empty.pem, impossibly tiny 0 bytes
	I1206 10:35:59.402992  434530 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 10:35:59.403028  434530 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca.pem (1082 bytes)
	I1206 10:35:59.403074  434530 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/cert.pem (1123 bytes)
	I1206 10:35:59.403125  434530 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-392561/.minikube/certs/key.pem (1679 bytes)
	I1206 10:35:59.403190  434530 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-392561/.minikube/files/etc/ssl/certs/3965342.pem (1708 bytes)
	I1206 10:35:59.404009  434530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 10:35:59.442828  434530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 10:35:59.475884  434530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 10:35:59.511751  434530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1206 10:35:59.545604  434530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/pause-672164/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1206 10:35:59.580640  434530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/pause-672164/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 10:35:59.615153  434530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/pause-672164/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 10:35:59.655259  434530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/pause-672164/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 10:35:59.692998  434530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 10:35:59.729177  434530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/certs/396534.pem --> /usr/share/ca-certificates/396534.pem (1338 bytes)
	I1206 10:35:59.763890  434530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-392561/.minikube/files/etc/ssl/certs/3965342.pem --> /usr/share/ca-certificates/3965342.pem (1708 bytes)
	I1206 10:35:59.800038  434530 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 10:35:59.825084  434530 ssh_runner.go:195] Run: openssl version
	I1206 10:35:59.832250  434530 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:35:59.847183  434530 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 10:35:59.863017  434530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:35:59.869313  434530 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 09:12 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:35:59.869401  434530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:35:59.877611  434530 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 10:35:59.896322  434530 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/396534.pem
	I1206 10:35:59.910336  434530 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/396534.pem /etc/ssl/certs/396534.pem
	I1206 10:35:59.927343  434530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/396534.pem
	I1206 10:35:59.933900  434530 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 09:24 /usr/share/ca-certificates/396534.pem
	I1206 10:35:59.933983  434530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/396534.pem
	I1206 10:35:59.948834  434530 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 10:36:00.026876  434530 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3965342.pem
	I1206 10:36:00.062374  434530 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3965342.pem /etc/ssl/certs/3965342.pem
	I1206 10:36:00.079203  434530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3965342.pem
	I1206 10:36:00.088841  434530 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 09:24 /usr/share/ca-certificates/3965342.pem
	I1206 10:36:00.088951  434530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3965342.pem
	I1206 10:36:00.101312  434530 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 10:36:00.125619  434530 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:36:00.140989  434530 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 10:36:00.165006  434530 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 10:36:00.192554  434530 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 10:36:00.215735  434530 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 10:36:00.237845  434530 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 10:36:00.251672  434530 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
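Each openssl x509 -checkend 86400 call above asks whether the named certificate expires within the next 24 hours. The same probe in Go with the standard library, using two of the paths from the log (run on the guest, where those files exist):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresSoon reports whether the PEM certificate at path expires within the
	// given window, mirroring `openssl x509 -checkend`.
	func expiresSoon(path string, within time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(within).After(cert.NotAfter), nil
	}

	func main() {
		for _, p := range []string{
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
		} {
			soon, err := expiresSoon(p, 24*time.Hour)
			fmt.Println(p, "expires within 24h:", soon, err)
		}
	}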
	I1206 10:36:00.265984  434530 kubeadm.go:401] StartCluster: {Name:pause-672164 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-672164 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.24 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:36:00.266145  434530 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:36:00.266239  434530 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:36:00.403795  434530 cri.go:89] found id: "6d387cfa7f24b3b950f15bd033e199852ff6c5648f9ef8df2f4cde0ca3ec3712"
	I1206 10:36:00.403841  434530 cri.go:89] found id: "2d5590905c07e4e7296409a32a3292feda5f28ec20bf8b8339a897004396bfb7"
	I1206 10:36:00.403849  434530 cri.go:89] found id: "90c3e9a407fc16ba86ec497af261fcd4e80faf407cc3e556c33ecdaf3221f861"
	I1206 10:36:00.403853  434530 cri.go:89] found id: "2fdf6d5a746b1b843dae69265868007ab66276f60458b7eac802101cb3fa0b82"
	I1206 10:36:00.403858  434530 cri.go:89] found id: "3b72595d8ec32ea94f7194a2cc8d334b54641dacccaefb38b586d6c757e6be2a"
	I1206 10:36:00.403865  434530 cri.go:89] found id: "98a0a2c568448fa7f6eee4c1fe4763b77743878a4003beddf9d51b6a8b4d66e8"
	I1206 10:36:00.403871  434530 cri.go:89] found id: ""
	I1206 10:36:00.403944  434530 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-672164 -n pause-672164
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-672164 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-672164 logs -n 25: (1.435526352s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬───────────────────
──┐
	│ COMMAND │                                                                                                                    ARGS                                                                                                                     │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼───────────────────
──┤
	│ ssh     │ -p cilium-777177 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                   │ cilium-777177             │ jenkins │ v1.37.0 │ 06 Dec 25 10:34 UTC │                     │
	│ ssh     │ -p cilium-777177 sudo systemctl cat containerd --no-pager                                                                                                                                                                                   │ cilium-777177             │ jenkins │ v1.37.0 │ 06 Dec 25 10:34 UTC │                     │
	│ ssh     │ -p cilium-777177 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                            │ cilium-777177             │ jenkins │ v1.37.0 │ 06 Dec 25 10:34 UTC │                     │
	│ ssh     │ -p cilium-777177 sudo cat /etc/containerd/config.toml                                                                                                                                                                                       │ cilium-777177             │ jenkins │ v1.37.0 │ 06 Dec 25 10:34 UTC │                     │
	│ ssh     │ -p cilium-777177 sudo containerd config dump                                                                                                                                                                                                │ cilium-777177             │ jenkins │ v1.37.0 │ 06 Dec 25 10:34 UTC │                     │
	│ ssh     │ -p cilium-777177 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                         │ cilium-777177             │ jenkins │ v1.37.0 │ 06 Dec 25 10:34 UTC │                     │
	│ ssh     │ -p cilium-777177 sudo systemctl cat crio --no-pager                                                                                                                                                                                         │ cilium-777177             │ jenkins │ v1.37.0 │ 06 Dec 25 10:34 UTC │                     │
	│ ssh     │ -p cilium-777177 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                               │ cilium-777177             │ jenkins │ v1.37.0 │ 06 Dec 25 10:34 UTC │                     │
	│ ssh     │ -p cilium-777177 sudo crio config                                                                                                                                                                                                           │ cilium-777177             │ jenkins │ v1.37.0 │ 06 Dec 25 10:34 UTC │                     │
	│ delete  │ -p cilium-777177                                                                                                                                                                                                                            │ cilium-777177             │ jenkins │ v1.37.0 │ 06 Dec 25 10:34 UTC │ 06 Dec 25 10:34 UTC │
	│ start   │ -p guest-968200 --no-kubernetes --driver=kvm2  --container-runtime=crio                                                                                                                                                                     │ guest-968200              │ jenkins │ v1.37.0 │ 06 Dec 25 10:34 UTC │ 06 Dec 25 10:34 UTC │
	│ ssh     │ -p NoKubernetes-012243 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                     │ NoKubernetes-012243       │ jenkins │ v1.37.0 │ 06 Dec 25 10:34 UTC │                     │
	│ delete  │ -p NoKubernetes-012243                                                                                                                                                                                                                      │ NoKubernetes-012243       │ jenkins │ v1.37.0 │ 06 Dec 25 10:34 UTC │ 06 Dec 25 10:34 UTC │
	│ start   │ -p pause-672164 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio                                                                                                                                     │ pause-672164              │ jenkins │ v1.37.0 │ 06 Dec 25 10:34 UTC │ 06 Dec 25 10:35 UTC │
	│ delete  │ -p force-systemd-env-294790                                                                                                                                                                                                                 │ force-systemd-env-294790  │ jenkins │ v1.37.0 │ 06 Dec 25 10:34 UTC │ 06 Dec 25 10:34 UTC │
	│ start   │ -p cert-expiration-694719 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio                                                                                                                                        │ cert-expiration-694719    │ jenkins │ v1.37.0 │ 06 Dec 25 10:34 UTC │ 06 Dec 25 10:35 UTC │
	│ start   │ -p force-systemd-flag-524307 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                                                                                                   │ force-systemd-flag-524307 │ jenkins │ v1.37.0 │ 06 Dec 25 10:34 UTC │ 06 Dec 25 10:35 UTC │
	│ start   │ -p pause-672164 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                                                              │ pause-672164              │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:36 UTC │
	│ ssh     │ force-systemd-flag-524307 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                        │ force-systemd-flag-524307 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ delete  │ -p force-systemd-flag-524307                                                                                                                                                                                                                │ force-systemd-flag-524307 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ start   │ -p cert-options-322688 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio                     │ cert-options-322688       │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:36 UTC │
	│ ssh     │ cert-options-322688 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                 │ cert-options-322688       │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ ssh     │ -p cert-options-322688 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                               │ cert-options-322688       │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ delete  │ -p cert-options-322688                                                                                                                                                                                                                      │ cert-options-322688       │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ start   │ -p old-k8s-version-147016 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-147016    │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴───────────────────
──┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:36:37
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 10:36:37.950552  435050 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:36:37.950757  435050 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:36:37.950772  435050 out.go:374] Setting ErrFile to fd 2...
	I1206 10:36:37.950778  435050 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:36:37.951005  435050 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-392561/.minikube/bin
	I1206 10:36:37.951546  435050 out.go:368] Setting JSON to false
	I1206 10:36:37.952686  435050 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8338,"bootTime":1765009060,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 10:36:37.952763  435050 start.go:143] virtualization: kvm guest
	I1206 10:36:37.958452  435050 out.go:179] * [old-k8s-version-147016] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 10:36:37.960162  435050 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 10:36:37.960163  435050 notify.go:221] Checking for updates...
	I1206 10:36:37.963052  435050 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:36:37.964619  435050 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-392561/kubeconfig
	I1206 10:36:37.965977  435050 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-392561/.minikube
	I1206 10:36:37.967477  435050 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 10:36:37.969003  435050 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:36:37.971120  435050 config.go:182] Loaded profile config "cert-expiration-694719": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:36:37.971260  435050 config.go:182] Loaded profile config "guest-968200": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1206 10:36:37.971488  435050 config.go:182] Loaded profile config "pause-672164": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:36:37.971637  435050 config.go:182] Loaded profile config "running-upgrade-976040": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1206 10:36:37.971849  435050 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:36:38.016887  435050 out.go:179] * Using the kvm2 driver based on user configuration
	I1206 10:36:38.018315  435050 start.go:309] selected driver: kvm2
	I1206 10:36:38.018337  435050 start.go:927] validating driver "kvm2" against <nil>
	I1206 10:36:38.018353  435050 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:36:38.019177  435050 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 10:36:38.019469  435050 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 10:36:38.019510  435050 cni.go:84] Creating CNI manager for ""
	I1206 10:36:38.019567  435050 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 10:36:38.019578  435050 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1206 10:36:38.019638  435050 start.go:353] cluster config:
	{Name:old-k8s-version-147016 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-147016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:36:38.019793  435050 iso.go:125] acquiring lock: {Name:mkf36bf2c9901302dc74c7ac02d02007e6a978f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:36:38.022353  435050 out.go:179] * Starting "old-k8s-version-147016" primary control-plane node in "old-k8s-version-147016" cluster
	I1206 10:36:38.023973  435050 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1206 10:36:38.024014  435050 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-392561/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1206 10:36:38.024024  435050 cache.go:65] Caching tarball of preloaded images
	I1206 10:36:38.024166  435050 preload.go:238] Found /home/jenkins/minikube-integration/22047-392561/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 10:36:38.024180  435050 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1206 10:36:38.024279  435050 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/old-k8s-version-147016/config.json ...
	I1206 10:36:38.024300  435050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/old-k8s-version-147016/config.json: {Name:mkb22350b7c5e8da0bc592e69a175cfc7cd0671e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:36:38.024467  435050 start.go:360] acquireMachinesLock for old-k8s-version-147016: {Name:mk0e8456872a81874c47f1b4b5997728e70c766d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1206 10:36:38.024525  435050 start.go:364] duration metric: took 31.6µs to acquireMachinesLock for "old-k8s-version-147016"
	I1206 10:36:38.024563  435050 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-147016 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-147016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 10:36:38.024621  435050 start.go:125] createHost starting for "" (driver="kvm2")
	W1206 10:36:36.702991  434530 pod_ready.go:104] pod "kube-apiserver-pause-672164" is not "Ready", error: <nil>
	W1206 10:36:38.703144  434530 pod_ready.go:104] pod "kube-apiserver-pause-672164" is not "Ready", error: <nil>
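The pod_ready warnings above come from polling kube-apiserver-pause-672164 until its Ready condition turns true. A sketch of that readiness poll with client-go; the kubeconfig path, namespace, and two-second interval are assumptions for illustration, and the test helper wraps the same API call in its own retry logic:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path assumed from the test environment above.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22047-392561/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		for {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-apiserver-pause-672164", metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						fmt.Println("pod is Ready")
						return
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
	}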
	I1206 10:36:37.068161  430005 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8443/healthz ...
	I1206 10:36:37.068894  430005 api_server.go:269] stopped: https://192.168.72.144:8443/healthz: Get "https://192.168.72.144:8443/healthz": dial tcp 192.168.72.144:8443: connect: connection refused
	I1206 10:36:37.068959  430005 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:36:37.069020  430005 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:36:37.112794  430005 cri.go:89] found id: "ef42a54ef499d5a2adf7f4aed819db372c989f7e0630b582fd079e349b6cae35"
	I1206 10:36:37.112828  430005 cri.go:89] found id: ""
	I1206 10:36:37.112839  430005 logs.go:282] 1 containers: [ef42a54ef499d5a2adf7f4aed819db372c989f7e0630b582fd079e349b6cae35]
	I1206 10:36:37.112909  430005 ssh_runner.go:195] Run: which crictl
	I1206 10:36:37.117579  430005 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:36:37.117654  430005 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:36:37.161855  430005 cri.go:89] found id: "c15436bd63f95394a3ea07f1a2070a38ed06ffd3d171293441fd7a3673e10d10"
	I1206 10:36:37.161882  430005 cri.go:89] found id: ""
	I1206 10:36:37.161892  430005 logs.go:282] 1 containers: [c15436bd63f95394a3ea07f1a2070a38ed06ffd3d171293441fd7a3673e10d10]
	I1206 10:36:37.161953  430005 ssh_runner.go:195] Run: which crictl
	I1206 10:36:37.166914  430005 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:36:37.166993  430005 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:36:37.217170  430005 cri.go:89] found id: "1939c8beb8b6117d08ce2adf290e6727b940fb305ab1a764cc15c20fa6939659"
	I1206 10:36:37.217196  430005 cri.go:89] found id: ""
	I1206 10:36:37.217206  430005 logs.go:282] 1 containers: [1939c8beb8b6117d08ce2adf290e6727b940fb305ab1a764cc15c20fa6939659]
	I1206 10:36:37.217269  430005 ssh_runner.go:195] Run: which crictl
	I1206 10:36:37.221935  430005 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:36:37.222004  430005 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:36:37.261787  430005 cri.go:89] found id: "c662e04651e0c43353787dce2b7695a04ebb8cf2e731530ce5121a2011ac2b2d"
	I1206 10:36:37.261812  430005 cri.go:89] found id: "056206a52c3bfc3442f384df1541dbb81537b66537f72a1a5b91a7c75ec9c8fd"
	I1206 10:36:37.261816  430005 cri.go:89] found id: ""
	I1206 10:36:37.261825  430005 logs.go:282] 2 containers: [c662e04651e0c43353787dce2b7695a04ebb8cf2e731530ce5121a2011ac2b2d 056206a52c3bfc3442f384df1541dbb81537b66537f72a1a5b91a7c75ec9c8fd]
	I1206 10:36:37.261897  430005 ssh_runner.go:195] Run: which crictl
	I1206 10:36:37.267346  430005 ssh_runner.go:195] Run: which crictl
	I1206 10:36:37.271598  430005 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:36:37.271676  430005 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:36:37.314194  430005 cri.go:89] found id: "92afa13c7c57b1c0db461ba6435fdef02f75277bda55beaae5114aa80081e98f"
	I1206 10:36:37.314224  430005 cri.go:89] found id: ""
	I1206 10:36:37.314235  430005 logs.go:282] 1 containers: [92afa13c7c57b1c0db461ba6435fdef02f75277bda55beaae5114aa80081e98f]
	I1206 10:36:37.314301  430005 ssh_runner.go:195] Run: which crictl
	I1206 10:36:37.318839  430005 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:36:37.318931  430005 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:36:37.370250  430005 cri.go:89] found id: "016cfd65d4fe02a416314f19fb648eae4fa5bc024733f20a4f380fd40d44cbf5"
	I1206 10:36:37.370276  430005 cri.go:89] found id: ""
	I1206 10:36:37.370287  430005 logs.go:282] 1 containers: [016cfd65d4fe02a416314f19fb648eae4fa5bc024733f20a4f380fd40d44cbf5]
	I1206 10:36:37.370355  430005 ssh_runner.go:195] Run: which crictl
	I1206 10:36:37.374911  430005 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:36:37.374996  430005 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:36:37.415011  430005 cri.go:89] found id: ""
	I1206 10:36:37.415044  430005 logs.go:282] 0 containers: []
	W1206 10:36:37.415054  430005 logs.go:284] No container was found matching "kindnet"
	I1206 10:36:37.415062  430005 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 10:36:37.415131  430005 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 10:36:37.452578  430005 cri.go:89] found id: "a9a482b1344e3ccf89f925482a320afd853f1776a4fb8bb8b6ab3c0bc3344b40"
	I1206 10:36:37.452609  430005 cri.go:89] found id: "54ddbf48cf44492ad8242a0598ba2231fd4f93ee0a4cec92d94c6def9c980987"
	I1206 10:36:37.452616  430005 cri.go:89] found id: ""
	I1206 10:36:37.452629  430005 logs.go:282] 2 containers: [a9a482b1344e3ccf89f925482a320afd853f1776a4fb8bb8b6ab3c0bc3344b40 54ddbf48cf44492ad8242a0598ba2231fd4f93ee0a4cec92d94c6def9c980987]
	I1206 10:36:37.452700  430005 ssh_runner.go:195] Run: which crictl
	I1206 10:36:37.457175  430005 ssh_runner.go:195] Run: which crictl
	I1206 10:36:37.461900  430005 logs.go:123] Gathering logs for kube-controller-manager [016cfd65d4fe02a416314f19fb648eae4fa5bc024733f20a4f380fd40d44cbf5] ...
	I1206 10:36:37.461932  430005 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 016cfd65d4fe02a416314f19fb648eae4fa5bc024733f20a4f380fd40d44cbf5"
	I1206 10:36:37.498162  430005 logs.go:123] Gathering logs for storage-provisioner [a9a482b1344e3ccf89f925482a320afd853f1776a4fb8bb8b6ab3c0bc3344b40] ...
	I1206 10:36:37.498202  430005 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9a482b1344e3ccf89f925482a320afd853f1776a4fb8bb8b6ab3c0bc3344b40"
	I1206 10:36:37.545701  430005 logs.go:123] Gathering logs for container status ...
	I1206 10:36:37.545768  430005 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:36:37.588860  430005 logs.go:123] Gathering logs for kubelet ...
	I1206 10:36:37.588895  430005 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:36:37.693126  430005 logs.go:123] Gathering logs for dmesg ...
	I1206 10:36:37.693173  430005 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:36:37.714326  430005 logs.go:123] Gathering logs for etcd [c15436bd63f95394a3ea07f1a2070a38ed06ffd3d171293441fd7a3673e10d10] ...
	I1206 10:36:37.714371  430005 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c15436bd63f95394a3ea07f1a2070a38ed06ffd3d171293441fd7a3673e10d10"
	I1206 10:36:37.764867  430005 logs.go:123] Gathering logs for coredns [1939c8beb8b6117d08ce2adf290e6727b940fb305ab1a764cc15c20fa6939659] ...
	I1206 10:36:37.764911  430005 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1939c8beb8b6117d08ce2adf290e6727b940fb305ab1a764cc15c20fa6939659"
	I1206 10:36:37.811730  430005 logs.go:123] Gathering logs for kube-scheduler [056206a52c3bfc3442f384df1541dbb81537b66537f72a1a5b91a7c75ec9c8fd] ...
	I1206 10:36:37.811772  430005 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 056206a52c3bfc3442f384df1541dbb81537b66537f72a1a5b91a7c75ec9c8fd"
	I1206 10:36:37.855806  430005 logs.go:123] Gathering logs for kube-proxy [92afa13c7c57b1c0db461ba6435fdef02f75277bda55beaae5114aa80081e98f] ...
	I1206 10:36:37.855855  430005 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92afa13c7c57b1c0db461ba6435fdef02f75277bda55beaae5114aa80081e98f"
	I1206 10:36:37.900665  430005 logs.go:123] Gathering logs for storage-provisioner [54ddbf48cf44492ad8242a0598ba2231fd4f93ee0a4cec92d94c6def9c980987] ...
	I1206 10:36:37.900702  430005 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54ddbf48cf44492ad8242a0598ba2231fd4f93ee0a4cec92d94c6def9c980987"
	I1206 10:36:37.953474  430005 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:36:37.953509  430005 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:36:38.282459  430005 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:36:38.282502  430005 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:36:38.362783  430005 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:36:38.362812  430005 logs.go:123] Gathering logs for kube-apiserver [ef42a54ef499d5a2adf7f4aed819db372c989f7e0630b582fd079e349b6cae35] ...
	I1206 10:36:38.362833  430005 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef42a54ef499d5a2adf7f4aed819db372c989f7e0630b582fd079e349b6cae35"
	I1206 10:36:38.416943  430005 logs.go:123] Gathering logs for kube-scheduler [c662e04651e0c43353787dce2b7695a04ebb8cf2e731530ce5121a2011ac2b2d] ...
	I1206 10:36:38.416980  430005 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c662e04651e0c43353787dce2b7695a04ebb8cf2e731530ce5121a2011ac2b2d"
	I1206 10:36:41.020439  430005 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8443/healthz ...
	I1206 10:36:41.021397  430005 api_server.go:269] stopped: https://192.168.72.144:8443/healthz: Get "https://192.168.72.144:8443/healthz": dial tcp 192.168.72.144:8443: connect: connection refused
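The probe loop above simply hits https://192.168.72.144:8443/healthz and treats "connection refused" as "not yet up". A minimal sketch of the same kind of check, assuming a self-signed apiserver certificate (hence skipping TLS verification) and illustrative URL, interval, and timeout values rather than minikube's actual implementation:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns 200 OK
// or the deadline passes. Illustrative only; values are placeholders.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// During bring-up the apiserver serves a self-signed cert, so this
		// health probe skips verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver answered: healthy
			}
		}
		time.Sleep(3 * time.Second) // connection refused or non-200: retry
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.144:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```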
	I1206 10:36:41.021457  430005 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:36:41.021518  430005 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:36:41.068202  430005 cri.go:89] found id: "ef42a54ef499d5a2adf7f4aed819db372c989f7e0630b582fd079e349b6cae35"
	I1206 10:36:41.068230  430005 cri.go:89] found id: ""
	I1206 10:36:41.068240  430005 logs.go:282] 1 containers: [ef42a54ef499d5a2adf7f4aed819db372c989f7e0630b582fd079e349b6cae35]
	I1206 10:36:41.068312  430005 ssh_runner.go:195] Run: which crictl
	I1206 10:36:41.073113  430005 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:36:41.073197  430005 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:36:41.112878  430005 cri.go:89] found id: "c15436bd63f95394a3ea07f1a2070a38ed06ffd3d171293441fd7a3673e10d10"
	I1206 10:36:41.112912  430005 cri.go:89] found id: ""
	I1206 10:36:41.112924  430005 logs.go:282] 1 containers: [c15436bd63f95394a3ea07f1a2070a38ed06ffd3d171293441fd7a3673e10d10]
	I1206 10:36:41.113004  430005 ssh_runner.go:195] Run: which crictl
	I1206 10:36:41.117999  430005 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:36:41.118088  430005 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:36:41.160912  430005 cri.go:89] found id: "1939c8beb8b6117d08ce2adf290e6727b940fb305ab1a764cc15c20fa6939659"
	I1206 10:36:41.160939  430005 cri.go:89] found id: ""
	I1206 10:36:41.160949  430005 logs.go:282] 1 containers: [1939c8beb8b6117d08ce2adf290e6727b940fb305ab1a764cc15c20fa6939659]
	I1206 10:36:41.161015  430005 ssh_runner.go:195] Run: which crictl
	I1206 10:36:41.166736  430005 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:36:41.166837  430005 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:36:41.212116  430005 cri.go:89] found id: "c662e04651e0c43353787dce2b7695a04ebb8cf2e731530ce5121a2011ac2b2d"
	I1206 10:36:41.212154  430005 cri.go:89] found id: "056206a52c3bfc3442f384df1541dbb81537b66537f72a1a5b91a7c75ec9c8fd"
	I1206 10:36:41.212160  430005 cri.go:89] found id: ""
	I1206 10:36:41.212171  430005 logs.go:282] 2 containers: [c662e04651e0c43353787dce2b7695a04ebb8cf2e731530ce5121a2011ac2b2d 056206a52c3bfc3442f384df1541dbb81537b66537f72a1a5b91a7c75ec9c8fd]
	I1206 10:36:41.212240  430005 ssh_runner.go:195] Run: which crictl
	I1206 10:36:41.218517  430005 ssh_runner.go:195] Run: which crictl
	I1206 10:36:41.223466  430005 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:36:41.223536  430005 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:36:41.271933  430005 cri.go:89] found id: "92afa13c7c57b1c0db461ba6435fdef02f75277bda55beaae5114aa80081e98f"
	I1206 10:36:41.271956  430005 cri.go:89] found id: ""
	I1206 10:36:41.271967  430005 logs.go:282] 1 containers: [92afa13c7c57b1c0db461ba6435fdef02f75277bda55beaae5114aa80081e98f]
	I1206 10:36:41.272037  430005 ssh_runner.go:195] Run: which crictl
	I1206 10:36:41.276689  430005 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:36:41.276793  430005 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:36:41.315856  430005 cri.go:89] found id: "016cfd65d4fe02a416314f19fb648eae4fa5bc024733f20a4f380fd40d44cbf5"
	I1206 10:36:41.315882  430005 cri.go:89] found id: ""
	I1206 10:36:41.315892  430005 logs.go:282] 1 containers: [016cfd65d4fe02a416314f19fb648eae4fa5bc024733f20a4f380fd40d44cbf5]
	I1206 10:36:41.315960  430005 ssh_runner.go:195] Run: which crictl
	I1206 10:36:41.321966  430005 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:36:41.322072  430005 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:36:41.360094  430005 cri.go:89] found id: ""
	I1206 10:36:41.360130  430005 logs.go:282] 0 containers: []
	W1206 10:36:41.360143  430005 logs.go:284] No container was found matching "kindnet"
	I1206 10:36:41.360152  430005 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 10:36:41.360235  430005 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 10:36:41.405142  430005 cri.go:89] found id: "a9a482b1344e3ccf89f925482a320afd853f1776a4fb8bb8b6ab3c0bc3344b40"
	I1206 10:36:41.405165  430005 cri.go:89] found id: "54ddbf48cf44492ad8242a0598ba2231fd4f93ee0a4cec92d94c6def9c980987"
	I1206 10:36:41.405169  430005 cri.go:89] found id: ""
	I1206 10:36:41.405177  430005 logs.go:282] 2 containers: [a9a482b1344e3ccf89f925482a320afd853f1776a4fb8bb8b6ab3c0bc3344b40 54ddbf48cf44492ad8242a0598ba2231fd4f93ee0a4cec92d94c6def9c980987]
	I1206 10:36:41.405231  430005 ssh_runner.go:195] Run: which crictl
	I1206 10:36:41.410002  430005 ssh_runner.go:195] Run: which crictl
	I1206 10:36:41.414618  430005 logs.go:123] Gathering logs for container status ...
	I1206 10:36:41.414656  430005 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:36:41.464639  430005 logs.go:123] Gathering logs for kubelet ...
	I1206 10:36:41.464683  430005 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:36:41.573231  430005 logs.go:123] Gathering logs for dmesg ...
	I1206 10:36:41.573272  430005 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:36:41.590456  430005 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:36:41.590489  430005 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:36:41.668656  430005 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:36:41.668686  430005 logs.go:123] Gathering logs for etcd [c15436bd63f95394a3ea07f1a2070a38ed06ffd3d171293441fd7a3673e10d10] ...
	I1206 10:36:41.668735  430005 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c15436bd63f95394a3ea07f1a2070a38ed06ffd3d171293441fd7a3673e10d10"
	I1206 10:36:41.721547  430005 logs.go:123] Gathering logs for kube-scheduler [c662e04651e0c43353787dce2b7695a04ebb8cf2e731530ce5121a2011ac2b2d] ...
	I1206 10:36:41.721596  430005 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c662e04651e0c43353787dce2b7695a04ebb8cf2e731530ce5121a2011ac2b2d"
	I1206 10:36:41.809490  430005 logs.go:123] Gathering logs for kube-scheduler [056206a52c3bfc3442f384df1541dbb81537b66537f72a1a5b91a7c75ec9c8fd] ...
	I1206 10:36:41.809565  430005 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 056206a52c3bfc3442f384df1541dbb81537b66537f72a1a5b91a7c75ec9c8fd"
	I1206 10:36:41.853673  430005 logs.go:123] Gathering logs for kube-proxy [92afa13c7c57b1c0db461ba6435fdef02f75277bda55beaae5114aa80081e98f] ...
	I1206 10:36:41.853724  430005 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92afa13c7c57b1c0db461ba6435fdef02f75277bda55beaae5114aa80081e98f"
	I1206 10:36:41.898365  430005 logs.go:123] Gathering logs for kube-controller-manager [016cfd65d4fe02a416314f19fb648eae4fa5bc024733f20a4f380fd40d44cbf5] ...
	I1206 10:36:41.898402  430005 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 016cfd65d4fe02a416314f19fb648eae4fa5bc024733f20a4f380fd40d44cbf5"
	I1206 10:36:41.936454  430005 logs.go:123] Gathering logs for kube-apiserver [ef42a54ef499d5a2adf7f4aed819db372c989f7e0630b582fd079e349b6cae35] ...
	I1206 10:36:41.936490  430005 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef42a54ef499d5a2adf7f4aed819db372c989f7e0630b582fd079e349b6cae35"
	I1206 10:36:41.983765  430005 logs.go:123] Gathering logs for coredns [1939c8beb8b6117d08ce2adf290e6727b940fb305ab1a764cc15c20fa6939659] ...
	I1206 10:36:41.983821  430005 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1939c8beb8b6117d08ce2adf290e6727b940fb305ab1a764cc15c20fa6939659"
	I1206 10:36:42.025314  430005 logs.go:123] Gathering logs for storage-provisioner [a9a482b1344e3ccf89f925482a320afd853f1776a4fb8bb8b6ab3c0bc3344b40] ...
	I1206 10:36:42.025352  430005 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9a482b1344e3ccf89f925482a320afd853f1776a4fb8bb8b6ab3c0bc3344b40"
	I1206 10:36:38.027144  435050 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1206 10:36:38.027335  435050 start.go:159] libmachine.API.Create for "old-k8s-version-147016" (driver="kvm2")
	I1206 10:36:38.027368  435050 client.go:173] LocalClient.Create starting
	I1206 10:36:38.027431  435050 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca.pem
	I1206 10:36:38.027477  435050 main.go:143] libmachine: Decoding PEM data...
	I1206 10:36:38.027539  435050 main.go:143] libmachine: Parsing certificate...
	I1206 10:36:38.027624  435050 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22047-392561/.minikube/certs/cert.pem
	I1206 10:36:38.027653  435050 main.go:143] libmachine: Decoding PEM data...
	I1206 10:36:38.027664  435050 main.go:143] libmachine: Parsing certificate...
	I1206 10:36:38.028010  435050 main.go:143] libmachine: creating domain...
	I1206 10:36:38.028024  435050 main.go:143] libmachine: creating network...
	I1206 10:36:38.029454  435050 main.go:143] libmachine: found existing default network
	I1206 10:36:38.029694  435050 main.go:143] libmachine: <network connections='4'>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1206 10:36:38.030639  435050 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:71:c2:68} reservation:<nil>}
	I1206 10:36:38.031204  435050 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:9f:cc:ba} reservation:<nil>}
	I1206 10:36:38.031863  435050 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:21:0c:82} reservation:<nil>}
	I1206 10:36:38.032821  435050 network.go:211] skipping subnet 192.168.72.0/24 that is taken: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.72.1 IfaceMTU:1500 IfaceMAC:52:54:00:8f:e8:4c} reservation:<nil>}
	I1206 10:36:38.034119  435050 network.go:206] using free private subnet 192.168.83.0/24: &{IP:192.168.83.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.83.0/24 Gateway:192.168.83.1 ClientMin:192.168.83.2 ClientMax:192.168.83.254 Broadcast:192.168.83.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c35220}
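The driver walks candidate 192.168.x.0/24 ranges, skips any whose gateway is already bound to a host bridge (virbr1 through virbr4 above), and settles on 192.168.83.0/24. A rough sketch of that free-subnet scan, assuming the gateway is always the .1 address and checking only local interface addresses (the candidate list and search order here are illustrative, not minikube's exact logic):

```go
package main

import (
	"fmt"
	"net"
)

// freePrivateSubnet returns the first candidate /24 whose .1 gateway is not
// already assigned to a local interface.
func freePrivateSubnet(candidates []string) (string, error) {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return "", err
	}
	taken := map[string]bool{}
	for _, a := range addrs {
		if ipnet, ok := a.(*net.IPNet); ok {
			taken[ipnet.IP.String()] = true
		}
	}
	for _, c := range candidates {
		if !taken[c+".1"] { // gateway already owned by an existing bridge?
			return c + ".0/24", nil
		}
	}
	return "", fmt.Errorf("no free /24 among %v", candidates)
}

func main() {
	subnet, err := freePrivateSubnet([]string{"192.168.39", "192.168.50", "192.168.61", "192.168.72", "192.168.83"})
	fmt.Println(subnet, err)
}
```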
	I1206 10:36:38.034210  435050 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-old-k8s-version-147016</name>
	  <dns enable='no'/>
	  <ip address='192.168.83.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.83.2' end='192.168.83.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1206 10:36:38.040752  435050 main.go:143] libmachine: creating private network mk-old-k8s-version-147016 192.168.83.0/24...
	I1206 10:36:38.124146  435050 main.go:143] libmachine: private network mk-old-k8s-version-147016 192.168.83.0/24 created
	I1206 10:36:38.124579  435050 main.go:143] libmachine: <network>
	  <name>mk-old-k8s-version-147016</name>
	  <uuid>9f3d0947-6fab-45f5-8d73-bd70632edeb2</uuid>
	  <bridge name='virbr5' stp='on' delay='0'/>
	  <mac address='52:54:00:fb:f0:5b'/>
	  <dns enable='no'/>
	  <ip address='192.168.83.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.83.2' end='192.168.83.253'/>
	    </dhcp>
	  </ip>
	</network>
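minikube's kvm2 driver creates this network through the libvirt API (the libmachine lines above), but the same operation can be reproduced by hand for debugging. An illustrative sketch that writes the network XML from the log to a temp file and registers and starts it with virsh, assuming virsh is on PATH; this is not the driver's own code path:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// XML copied from the log: an isolated DHCP-only network, no NAT forward.
const networkXML = `<network>
  <name>mk-old-k8s-version-147016</name>
  <dns enable='no'/>
  <ip address='192.168.83.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.83.2' end='192.168.83.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	f, err := os.CreateTemp("", "mk-net-*.xml")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(networkXML); err != nil {
		panic(err)
	}
	f.Close()

	// net-define registers the network; net-start brings up the bridge
	// (virbr5 in the log) with its dnsmasq-backed DHCP range.
	for _, args := range [][]string{
		{"net-define", f.Name()},
		{"net-start", "mk-old-k8s-version-147016"},
	} {
		out, err := exec.Command("virsh", args...).CombinedOutput()
		fmt.Printf("virsh %v: %s (err=%v)\n", args, out, err)
	}
}
```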
	
	I1206 10:36:38.124631  435050 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22047-392561/.minikube/machines/old-k8s-version-147016 ...
	I1206 10:36:38.124664  435050 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22047-392561/.minikube/cache/iso/amd64/minikube-v1.37.0-1764843329-22032-amd64.iso
	I1206 10:36:38.124676  435050 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22047-392561/.minikube
	I1206 10:36:38.124781  435050 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22047-392561/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22047-392561/.minikube/cache/iso/amd64/minikube-v1.37.0-1764843329-22032-amd64.iso...
	I1206 10:36:38.395889  435050 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22047-392561/.minikube/machines/old-k8s-version-147016/id_rsa...
	I1206 10:36:38.561671  435050 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22047-392561/.minikube/machines/old-k8s-version-147016/old-k8s-version-147016.rawdisk...
	I1206 10:36:38.561736  435050 main.go:143] libmachine: Writing magic tar header
	I1206 10:36:38.561763  435050 main.go:143] libmachine: Writing SSH key tar header
	I1206 10:36:38.561845  435050 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22047-392561/.minikube/machines/old-k8s-version-147016 ...
	I1206 10:36:38.561905  435050 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22047-392561/.minikube/machines/old-k8s-version-147016
	I1206 10:36:38.561944  435050 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22047-392561/.minikube/machines/old-k8s-version-147016 (perms=drwx------)
	I1206 10:36:38.561960  435050 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22047-392561/.minikube/machines
	I1206 10:36:38.561970  435050 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22047-392561/.minikube/machines (perms=drwxr-xr-x)
	I1206 10:36:38.561982  435050 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22047-392561/.minikube
	I1206 10:36:38.561991  435050 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22047-392561/.minikube (perms=drwxr-xr-x)
	I1206 10:36:38.561999  435050 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22047-392561
	I1206 10:36:38.562015  435050 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22047-392561 (perms=drwxrwxr-x)
	I1206 10:36:38.562029  435050 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1206 10:36:38.562037  435050 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1206 10:36:38.562048  435050 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1206 10:36:38.562055  435050 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1206 10:36:38.562064  435050 main.go:143] libmachine: checking permissions on dir: /home
	I1206 10:36:38.562070  435050 main.go:143] libmachine: skipping /home - not owner
	I1206 10:36:38.562075  435050 main.go:143] libmachine: defining domain...
	I1206 10:36:38.563507  435050 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>old-k8s-version-147016</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22047-392561/.minikube/machines/old-k8s-version-147016/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22047-392561/.minikube/machines/old-k8s-version-147016/old-k8s-version-147016.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-old-k8s-version-147016'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1206 10:36:38.568842  435050 main.go:143] libmachine: domain old-k8s-version-147016 has defined MAC address 52:54:00:ab:48:b3 in network default
	I1206 10:36:38.569450  435050 main.go:143] libmachine: domain old-k8s-version-147016 has defined MAC address 52:54:00:cf:ed:fa in network mk-old-k8s-version-147016
	I1206 10:36:38.569468  435050 main.go:143] libmachine: starting domain...
	I1206 10:36:38.569472  435050 main.go:143] libmachine: ensuring networks are active...
	I1206 10:36:38.570181  435050 main.go:143] libmachine: Ensuring network default is active
	I1206 10:36:38.570550  435050 main.go:143] libmachine: Ensuring network mk-old-k8s-version-147016 is active
	I1206 10:36:38.571104  435050 main.go:143] libmachine: getting domain XML...
	I1206 10:36:38.572492  435050 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>old-k8s-version-147016</name>
	  <uuid>87bee8b8-049d-4cca-9638-3cb05e746fd2</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22047-392561/.minikube/machines/old-k8s-version-147016/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22047-392561/.minikube/machines/old-k8s-version-147016/old-k8s-version-147016.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:cf:ed:fa'/>
	      <source network='mk-old-k8s-version-147016'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:ab:48:b3'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1206 10:36:39.950641  435050 main.go:143] libmachine: waiting for domain to start...
	I1206 10:36:39.952338  435050 main.go:143] libmachine: domain is now running
	I1206 10:36:39.952357  435050 main.go:143] libmachine: waiting for IP...
	I1206 10:36:39.953235  435050 main.go:143] libmachine: domain old-k8s-version-147016 has defined MAC address 52:54:00:cf:ed:fa in network mk-old-k8s-version-147016
	I1206 10:36:39.954070  435050 main.go:143] libmachine: no network interface addresses found for domain old-k8s-version-147016 (source=lease)
	I1206 10:36:39.954092  435050 main.go:143] libmachine: trying to list again with source=arp
	I1206 10:36:39.954582  435050 main.go:143] libmachine: unable to find current IP address of domain old-k8s-version-147016 in network mk-old-k8s-version-147016 (interfaces detected: [])
	I1206 10:36:39.954635  435050 retry.go:31] will retry after 300.695628ms: waiting for domain to come up
	I1206 10:36:40.257081  435050 main.go:143] libmachine: domain old-k8s-version-147016 has defined MAC address 52:54:00:cf:ed:fa in network mk-old-k8s-version-147016
	I1206 10:36:40.257927  435050 main.go:143] libmachine: no network interface addresses found for domain old-k8s-version-147016 (source=lease)
	I1206 10:36:40.257952  435050 main.go:143] libmachine: trying to list again with source=arp
	I1206 10:36:40.258588  435050 main.go:143] libmachine: unable to find current IP address of domain old-k8s-version-147016 in network mk-old-k8s-version-147016 (interfaces detected: [])
	I1206 10:36:40.258640  435050 retry.go:31] will retry after 306.155855ms: waiting for domain to come up
	I1206 10:36:40.566275  435050 main.go:143] libmachine: domain old-k8s-version-147016 has defined MAC address 52:54:00:cf:ed:fa in network mk-old-k8s-version-147016
	I1206 10:36:40.567147  435050 main.go:143] libmachine: no network interface addresses found for domain old-k8s-version-147016 (source=lease)
	I1206 10:36:40.567169  435050 main.go:143] libmachine: trying to list again with source=arp
	I1206 10:36:40.567678  435050 main.go:143] libmachine: unable to find current IP address of domain old-k8s-version-147016 in network mk-old-k8s-version-147016 (interfaces detected: [])
	I1206 10:36:40.567746  435050 retry.go:31] will retry after 416.389234ms: waiting for domain to come up
	I1206 10:36:40.985583  435050 main.go:143] libmachine: domain old-k8s-version-147016 has defined MAC address 52:54:00:cf:ed:fa in network mk-old-k8s-version-147016
	I1206 10:36:40.986333  435050 main.go:143] libmachine: no network interface addresses found for domain old-k8s-version-147016 (source=lease)
	I1206 10:36:40.986358  435050 main.go:143] libmachine: trying to list again with source=arp
	I1206 10:36:40.986849  435050 main.go:143] libmachine: unable to find current IP address of domain old-k8s-version-147016 in network mk-old-k8s-version-147016 (interfaces detected: [])
	I1206 10:36:40.986911  435050 retry.go:31] will retry after 515.816474ms: waiting for domain to come up
	I1206 10:36:41.504494  435050 main.go:143] libmachine: domain old-k8s-version-147016 has defined MAC address 52:54:00:cf:ed:fa in network mk-old-k8s-version-147016
	I1206 10:36:41.505229  435050 main.go:143] libmachine: no network interface addresses found for domain old-k8s-version-147016 (source=lease)
	I1206 10:36:41.505274  435050 main.go:143] libmachine: trying to list again with source=arp
	I1206 10:36:41.505694  435050 main.go:143] libmachine: unable to find current IP address of domain old-k8s-version-147016 in network mk-old-k8s-version-147016 (interfaces detected: [])
	I1206 10:36:41.505749  435050 retry.go:31] will retry after 492.253426ms: waiting for domain to come up
	I1206 10:36:41.999585  435050 main.go:143] libmachine: domain old-k8s-version-147016 has defined MAC address 52:54:00:cf:ed:fa in network mk-old-k8s-version-147016
	I1206 10:36:42.000336  435050 main.go:143] libmachine: no network interface addresses found for domain old-k8s-version-147016 (source=lease)
	I1206 10:36:42.000359  435050 main.go:143] libmachine: trying to list again with source=arp
	I1206 10:36:42.000831  435050 main.go:143] libmachine: unable to find current IP address of domain old-k8s-version-147016 in network mk-old-k8s-version-147016 (interfaces detected: [])
	I1206 10:36:42.000881  435050 retry.go:31] will retry after 741.741494ms: waiting for domain to come up
	I1206 10:36:42.744213  435050 main.go:143] libmachine: domain old-k8s-version-147016 has defined MAC address 52:54:00:cf:ed:fa in network mk-old-k8s-version-147016
	I1206 10:36:42.744931  435050 main.go:143] libmachine: no network interface addresses found for domain old-k8s-version-147016 (source=lease)
	I1206 10:36:42.744950  435050 main.go:143] libmachine: trying to list again with source=arp
	I1206 10:36:42.745348  435050 main.go:143] libmachine: unable to find current IP address of domain old-k8s-version-147016 in network mk-old-k8s-version-147016 (interfaces detected: [])
	I1206 10:36:42.745394  435050 retry.go:31] will retry after 1.023661448s: waiting for domain to come up
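The "waiting for IP" loop above asks libvirt for the domain's address first from the DHCP lease table (source=lease) and then falls back to ARP (source=arp), retrying with a growing delay. A minimal sketch of the same wait that shells out to virsh domifaddr, with the domain name taken from the log and purely illustrative backoff values:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForDomainIP polls `virsh domifaddr` until the domain reports an IPv4
// address or the retry budget runs out. Mirrors the lease-then-arp fallback
// seen in the log; not the driver's actual implementation.
func waitForDomainIP(domain string, attempts int) (string, error) {
	delay := 300 * time.Millisecond
	for i := 0; i < attempts; i++ {
		for _, source := range []string{"lease", "arp"} {
			out, err := exec.Command("virsh", "domifaddr", domain, "--source", source).Output()
			if err != nil {
				continue
			}
			for _, field := range strings.Fields(string(out)) {
				// domifaddr prints addresses as e.g. 192.168.83.24/24
				if strings.Contains(field, "/") && strings.Count(field, ".") == 3 {
					return strings.Split(field, "/")[0], nil
				}
			}
		}
		time.Sleep(delay)
		delay += delay / 2 // stretch the wait between polls
	}
	return "", fmt.Errorf("no IP for domain %s after %d attempts", domain, attempts)
}

func main() {
	ip, err := waitForDomainIP("old-k8s-version-147016", 20)
	fmt.Println(ip, err)
}
```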
	W1206 10:36:40.704822  434530 pod_ready.go:104] pod "kube-apiserver-pause-672164" is not "Ready", error: <nil>
	W1206 10:36:42.705131  434530 pod_ready.go:104] pod "kube-apiserver-pause-672164" is not "Ready", error: <nil>
	I1206 10:36:43.703554  434530 pod_ready.go:94] pod "kube-apiserver-pause-672164" is "Ready"
	I1206 10:36:43.703583  434530 pod_ready.go:86] duration metric: took 9.006905112s for pod "kube-apiserver-pause-672164" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:36:43.707508  434530 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-672164" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:36:43.713838  434530 pod_ready.go:94] pod "kube-controller-manager-pause-672164" is "Ready"
	I1206 10:36:43.713863  434530 pod_ready.go:86] duration metric: took 6.328142ms for pod "kube-controller-manager-pause-672164" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:36:43.717010  434530 pod_ready.go:83] waiting for pod "kube-proxy-qmzzj" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:36:43.722956  434530 pod_ready.go:94] pod "kube-proxy-qmzzj" is "Ready"
	I1206 10:36:43.722983  434530 pod_ready.go:86] duration metric: took 5.949516ms for pod "kube-proxy-qmzzj" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:36:43.725271  434530 pod_ready.go:83] waiting for pod "kube-scheduler-pause-672164" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:36:43.900924  434530 pod_ready.go:94] pod "kube-scheduler-pause-672164" is "Ready"
	I1206 10:36:43.900966  434530 pod_ready.go:86] duration metric: took 175.666072ms for pod "kube-scheduler-pause-672164" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:36:43.900983  434530 pod_ready.go:40] duration metric: took 15.230499711s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 10:36:43.945915  434530 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1206 10:36:43.948102  434530 out.go:179] * Done! kubectl is now configured to use "pause-672164" cluster and "default" namespace by default
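The pod_ready checks above poll each kube-system control-plane pod until its Ready condition is true (about 15s of extra waiting in this run). The same condition can be expressed directly with kubectl; a sketch assuming the pause-672164 context and the component/k8s-app labels that kubeadm puts on these pods, offered as an illustration rather than the test helper itself:

```go
package main

import (
	"fmt"
	"os/exec"
)

// Wait for the core control-plane pods to report Ready, the condition
// pod_ready.go polls for above.
func main() {
	for _, selector := range []string{
		"component=kube-apiserver",
		"component=kube-controller-manager",
		"k8s-app=kube-proxy",
		"component=kube-scheduler",
	} {
		cmd := exec.Command("kubectl", "--context", "pause-672164",
			"-n", "kube-system", "wait", "pod",
			"-l", selector, "--for=condition=Ready", "--timeout=120s")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s: %s (err=%v)\n", selector, out, err)
	}
}
```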
	
	
	==> CRI-O <==
	Dec 06 10:36:44 pause-672164 crio[2559]: time="2025-12-06 10:36:44.648680615Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765017404648647156,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e73a5def-1723-47bf-9737-3b2f5cb53b8f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 10:36:44 pause-672164 crio[2559]: time="2025-12-06 10:36:44.649667415Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9bac2304-9dbc-4118-bef3-b3cff69760d7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 10:36:44 pause-672164 crio[2559]: time="2025-12-06 10:36:44.649724622Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9bac2304-9dbc-4118-bef3-b3cff69760d7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 10:36:44 pause-672164 crio[2559]: time="2025-12-06 10:36:44.650002983Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8979432950a6c1fcf67c68b2eb4f02b47f20805a424008f709a4c7c1336ae55a,PodSandboxId:99e609d092359786634cd5efd0aea277ecd7255b007761d40c9bb9f216c81476,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765017387192120352,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-fb62d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88027edc-48b1-4cef-b502-1862cea06db0,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4ffca8ac4a6b84e6d70cdaebac985a812ae72c311b3827a79f576f90f19e53e,PodSandboxId:96f91b00ec54aa6941734a4c5d60a8da4d942f04550aee3a058321655734aeab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765017384141083700,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73e60cc2d982768ad976362e52467fd0,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81620bc09c681970184caacee9d8f2bdbab6c32a58f39a58446b72afb8dc9407,PodSandboxId:57f690bff20a2d5c7ad517c3b5496b1156100bc04274085178b8c65dd74102bd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12da
f485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765017384127986605,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 756e0750a6732f41f16f4b7b8e627d9b,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca28df94e41085ddadb99a077625caff629e055cc8ae7009649fb618a1f8c943,PodSandboxId:5855c9b5abc427d12d4af700ed4faea9b3b721d2319059757a5d51937652abcf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120
f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765017360775343824,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79289946cd25f29b404867a34cf3287b,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e9e3a2b7bc2fda1638f69e12db42711d3ac02ebc790d67250daf6b15c963eb4,PodSandboxId:56472d9974f1f9f0f703fc719d7257fb402ba4f32e5dd3fbf71296015640386a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1
,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765017360671800743,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmzzj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4488afb5-0b73-4848-a3f3-c7336feac4f3,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c69c972f222208cf4d5a370865dbfb06eae3160d31223dd19809e5b0cf80378d,PodSandboxId:96f91b00ec54aa6941734a4c5d60a8da4d942f04550aee3a058321655734aeab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f
569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765017360579726461,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73e60cc2d982768ad976362e52467fd0,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f046926dfe49bcebfc784103713fd9e1334f5b98b8672a1a303e0c39ddb8633,PodSandboxId:7d1f1d6c31fd46082d1d5d2564230a77bc8a886b722d15698
2c5d96502aff2dd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765017360463302968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bc552323c16a27b410b35f696f26c30,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bd30e9a544651c2e7f3f2
b99fc219f74f5da0f762cee03f060b5cd77aefa4db,PodSandboxId:57f690bff20a2d5c7ad517c3b5496b1156100bc04274085178b8c65dd74102bd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765017360469091993,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 756e0750a6732f41f16f4b7b8e627d9b,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d387cfa7f24b3b950f15bd033e199852ff6c5648f9ef8df2f4cde0ca3ec3712,PodSandboxId:75beb4bdee5b117fe7ef6b48e84b4db2bce262a65a0b817f4bd1b0248137557d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765017316240432710,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-fb62d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88027edc-48b1-4cef-b502-1862cea06db0,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"
TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5590905c07e4e7296409a32a3292feda5f28ec20bf8b8339a897004396bfb7,PodSandboxId:efbd89472c2319edaf08530523fec9f4ddbb54be7f2f9f618c337467d84e3a3e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765017315834529074,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name:
kube-proxy-qmzzj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4488afb5-0b73-4848-a3f3-c7336feac4f3,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90c3e9a407fc16ba86ec497af261fcd4e80faf407cc3e556c33ecdaf3221f861,PodSandboxId:4167909c1067976616c1119a060169cac5524f93d78fc01670d9903533393729,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765017301876202735,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-672164,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 79289946cd25f29b404867a34cf3287b,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fdf6d5a746b1b843dae69265868007ab66276f60458b7eac802101cb3fa0b82,PodSandboxId:c4c188ea6f8262a2b295144f00d1faea89115c0676f4f0dc24ae1aa08e36fef9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765017301869342260,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bc552323c16a27b410b35f696f26c30,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9bac2304-9dbc-4118-bef3-b3cff69760d7 name=/runtime.v1.RuntimeService/ListContainers
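Each ListContainersResponse dump above is simply what the CRI-O socket returns for an unfiltered list; the test's cri.go narrows it per component with the `crictl ps -a --quiet --name=<component>` calls visible earlier in the log. A small sketch that reproduces that filtering from inside the node, using only the flags shown in this report:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists container IDs for one component via crictl, mirroring
// the `sudo crictl ps -a --quiet --name=<component>` calls in the log.
// Must run on the node/VM where the CRI-O socket lives.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(component)
		fmt.Printf("%s: %v (err=%v)\n", component, ids, err)
	}
}
```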
	Dec 06 10:36:44 pause-672164 crio[2559]: time="2025-12-06 10:36:44.697117259Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=628935db-1c5d-4058-bca4-ae45d2858de2 name=/runtime.v1.RuntimeService/Version
	Dec 06 10:36:44 pause-672164 crio[2559]: time="2025-12-06 10:36:44.697195648Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=628935db-1c5d-4058-bca4-ae45d2858de2 name=/runtime.v1.RuntimeService/Version
	Dec 06 10:36:44 pause-672164 crio[2559]: time="2025-12-06 10:36:44.698973859Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e2da6083-4c36-41ef-a734-4d22fb21e7a6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 10:36:44 pause-672164 crio[2559]: time="2025-12-06 10:36:44.699592553Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765017404699525856,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e2da6083-4c36-41ef-a734-4d22fb21e7a6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 10:36:44 pause-672164 crio[2559]: time="2025-12-06 10:36:44.700698204Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e50ccf33-0b27-4861-8e21-b2e89d36301b name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 10:36:44 pause-672164 crio[2559]: time="2025-12-06 10:36:44.700801908Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e50ccf33-0b27-4861-8e21-b2e89d36301b name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 10:36:44 pause-672164 crio[2559]: time="2025-12-06 10:36:44.701227453Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8979432950a6c1fcf67c68b2eb4f02b47f20805a424008f709a4c7c1336ae55a,PodSandboxId:99e609d092359786634cd5efd0aea277ecd7255b007761d40c9bb9f216c81476,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765017387192120352,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-fb62d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88027edc-48b1-4cef-b502-1862cea06db0,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4ffca8ac4a6b84e6d70cdaebac985a812ae72c311b3827a79f576f90f19e53e,PodSandboxId:96f91b00ec54aa6941734a4c5d60a8da4d942f04550aee3a058321655734aeab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765017384141083700,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73e60cc2d982768ad976362e52467fd0,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81620bc09c681970184caacee9d8f2bdbab6c32a58f39a58446b72afb8dc9407,PodSandboxId:57f690bff20a2d5c7ad517c3b5496b1156100bc04274085178b8c65dd74102bd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12da
f485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765017384127986605,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 756e0750a6732f41f16f4b7b8e627d9b,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca28df94e41085ddadb99a077625caff629e055cc8ae7009649fb618a1f8c943,PodSandboxId:5855c9b5abc427d12d4af700ed4faea9b3b721d2319059757a5d51937652abcf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120
f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765017360775343824,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79289946cd25f29b404867a34cf3287b,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e9e3a2b7bc2fda1638f69e12db42711d3ac02ebc790d67250daf6b15c963eb4,PodSandboxId:56472d9974f1f9f0f703fc719d7257fb402ba4f32e5dd3fbf71296015640386a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1
,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765017360671800743,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmzzj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4488afb5-0b73-4848-a3f3-c7336feac4f3,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c69c972f222208cf4d5a370865dbfb06eae3160d31223dd19809e5b0cf80378d,PodSandboxId:96f91b00ec54aa6941734a4c5d60a8da4d942f04550aee3a058321655734aeab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f
569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765017360579726461,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73e60cc2d982768ad976362e52467fd0,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f046926dfe49bcebfc784103713fd9e1334f5b98b8672a1a303e0c39ddb8633,PodSandboxId:7d1f1d6c31fd46082d1d5d2564230a77bc8a886b722d15698
2c5d96502aff2dd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765017360463302968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bc552323c16a27b410b35f696f26c30,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bd30e9a544651c2e7f3f2
b99fc219f74f5da0f762cee03f060b5cd77aefa4db,PodSandboxId:57f690bff20a2d5c7ad517c3b5496b1156100bc04274085178b8c65dd74102bd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765017360469091993,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 756e0750a6732f41f16f4b7b8e627d9b,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d387cfa7f24b3b950f15bd033e199852ff6c5648f9ef8df2f4cde0ca3ec3712,PodSandboxId:75beb4bdee5b117fe7ef6b48e84b4db2bce262a65a0b817f4bd1b0248137557d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765017316240432710,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-fb62d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88027edc-48b1-4cef-b502-1862cea06db0,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"
TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5590905c07e4e7296409a32a3292feda5f28ec20bf8b8339a897004396bfb7,PodSandboxId:efbd89472c2319edaf08530523fec9f4ddbb54be7f2f9f618c337467d84e3a3e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765017315834529074,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name:
kube-proxy-qmzzj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4488afb5-0b73-4848-a3f3-c7336feac4f3,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90c3e9a407fc16ba86ec497af261fcd4e80faf407cc3e556c33ecdaf3221f861,PodSandboxId:4167909c1067976616c1119a060169cac5524f93d78fc01670d9903533393729,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765017301876202735,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-672164,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 79289946cd25f29b404867a34cf3287b,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fdf6d5a746b1b843dae69265868007ab66276f60458b7eac802101cb3fa0b82,PodSandboxId:c4c188ea6f8262a2b295144f00d1faea89115c0676f4f0dc24ae1aa08e36fef9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765017301869342260,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bc552323c16a27b410b35f696f26c30,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e50ccf33-0b27-4861-8e21-b2e89d36301b name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 10:36:44 pause-672164 crio[2559]: time="2025-12-06 10:36:44.741157907Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8fba6b79-767f-488e-8d11-db9f7d4e6d08 name=/runtime.v1.RuntimeService/Version
	Dec 06 10:36:44 pause-672164 crio[2559]: time="2025-12-06 10:36:44.741336371Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8fba6b79-767f-488e-8d11-db9f7d4e6d08 name=/runtime.v1.RuntimeService/Version
	Dec 06 10:36:44 pause-672164 crio[2559]: time="2025-12-06 10:36:44.743779629Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=53219c8e-e940-447e-addf-ff81e4be84f6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 10:36:44 pause-672164 crio[2559]: time="2025-12-06 10:36:44.744846985Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765017404744815565,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=53219c8e-e940-447e-addf-ff81e4be84f6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 10:36:44 pause-672164 crio[2559]: time="2025-12-06 10:36:44.746218468Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=33fb7569-8f36-44b6-992f-8e3d41af2c36 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 10:36:44 pause-672164 crio[2559]: time="2025-12-06 10:36:44.746319308Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=33fb7569-8f36-44b6-992f-8e3d41af2c36 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 10:36:44 pause-672164 crio[2559]: time="2025-12-06 10:36:44.746732221Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8979432950a6c1fcf67c68b2eb4f02b47f20805a424008f709a4c7c1336ae55a,PodSandboxId:99e609d092359786634cd5efd0aea277ecd7255b007761d40c9bb9f216c81476,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765017387192120352,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-fb62d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88027edc-48b1-4cef-b502-1862cea06db0,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4ffca8ac4a6b84e6d70cdaebac985a812ae72c311b3827a79f576f90f19e53e,PodSandboxId:96f91b00ec54aa6941734a4c5d60a8da4d942f04550aee3a058321655734aeab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765017384141083700,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73e60cc2d982768ad976362e52467fd0,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81620bc09c681970184caacee9d8f2bdbab6c32a58f39a58446b72afb8dc9407,PodSandboxId:57f690bff20a2d5c7ad517c3b5496b1156100bc04274085178b8c65dd74102bd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12da
f485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765017384127986605,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 756e0750a6732f41f16f4b7b8e627d9b,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca28df94e41085ddadb99a077625caff629e055cc8ae7009649fb618a1f8c943,PodSandboxId:5855c9b5abc427d12d4af700ed4faea9b3b721d2319059757a5d51937652abcf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120
f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765017360775343824,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79289946cd25f29b404867a34cf3287b,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e9e3a2b7bc2fda1638f69e12db42711d3ac02ebc790d67250daf6b15c963eb4,PodSandboxId:56472d9974f1f9f0f703fc719d7257fb402ba4f32e5dd3fbf71296015640386a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1
,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765017360671800743,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmzzj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4488afb5-0b73-4848-a3f3-c7336feac4f3,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c69c972f222208cf4d5a370865dbfb06eae3160d31223dd19809e5b0cf80378d,PodSandboxId:96f91b00ec54aa6941734a4c5d60a8da4d942f04550aee3a058321655734aeab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f
569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765017360579726461,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73e60cc2d982768ad976362e52467fd0,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f046926dfe49bcebfc784103713fd9e1334f5b98b8672a1a303e0c39ddb8633,PodSandboxId:7d1f1d6c31fd46082d1d5d2564230a77bc8a886b722d15698
2c5d96502aff2dd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765017360463302968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bc552323c16a27b410b35f696f26c30,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bd30e9a544651c2e7f3f2
b99fc219f74f5da0f762cee03f060b5cd77aefa4db,PodSandboxId:57f690bff20a2d5c7ad517c3b5496b1156100bc04274085178b8c65dd74102bd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765017360469091993,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 756e0750a6732f41f16f4b7b8e627d9b,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d387cfa7f24b3b950f15bd033e199852ff6c5648f9ef8df2f4cde0ca3ec3712,PodSandboxId:75beb4bdee5b117fe7ef6b48e84b4db2bce262a65a0b817f4bd1b0248137557d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765017316240432710,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-fb62d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88027edc-48b1-4cef-b502-1862cea06db0,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"
TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5590905c07e4e7296409a32a3292feda5f28ec20bf8b8339a897004396bfb7,PodSandboxId:efbd89472c2319edaf08530523fec9f4ddbb54be7f2f9f618c337467d84e3a3e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765017315834529074,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name:
kube-proxy-qmzzj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4488afb5-0b73-4848-a3f3-c7336feac4f3,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90c3e9a407fc16ba86ec497af261fcd4e80faf407cc3e556c33ecdaf3221f861,PodSandboxId:4167909c1067976616c1119a060169cac5524f93d78fc01670d9903533393729,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765017301876202735,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-672164,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 79289946cd25f29b404867a34cf3287b,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fdf6d5a746b1b843dae69265868007ab66276f60458b7eac802101cb3fa0b82,PodSandboxId:c4c188ea6f8262a2b295144f00d1faea89115c0676f4f0dc24ae1aa08e36fef9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765017301869342260,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bc552323c16a27b410b35f696f26c30,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=33fb7569-8f36-44b6-992f-8e3d41af2c36 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 10:36:44 pause-672164 crio[2559]: time="2025-12-06 10:36:44.794385138Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=296a15e1-e35f-4453-9384-cfe6d842c0d6 name=/runtime.v1.RuntimeService/Version
	Dec 06 10:36:44 pause-672164 crio[2559]: time="2025-12-06 10:36:44.794509138Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=296a15e1-e35f-4453-9384-cfe6d842c0d6 name=/runtime.v1.RuntimeService/Version
	Dec 06 10:36:44 pause-672164 crio[2559]: time="2025-12-06 10:36:44.796134035Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=06d4b6bd-e96e-4b2d-a808-1f9139634840 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 10:36:44 pause-672164 crio[2559]: time="2025-12-06 10:36:44.797090450Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765017404796979574,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=06d4b6bd-e96e-4b2d-a808-1f9139634840 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 10:36:44 pause-672164 crio[2559]: time="2025-12-06 10:36:44.798494185Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c8fbfbac-8a46-4603-ab2a-402ece16ea38 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 10:36:44 pause-672164 crio[2559]: time="2025-12-06 10:36:44.798584956Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c8fbfbac-8a46-4603-ab2a-402ece16ea38 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 10:36:44 pause-672164 crio[2559]: time="2025-12-06 10:36:44.798853456Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8979432950a6c1fcf67c68b2eb4f02b47f20805a424008f709a4c7c1336ae55a,PodSandboxId:99e609d092359786634cd5efd0aea277ecd7255b007761d40c9bb9f216c81476,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765017387192120352,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-fb62d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88027edc-48b1-4cef-b502-1862cea06db0,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4ffca8ac4a6b84e6d70cdaebac985a812ae72c311b3827a79f576f90f19e53e,PodSandboxId:96f91b00ec54aa6941734a4c5d60a8da4d942f04550aee3a058321655734aeab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765017384141083700,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73e60cc2d982768ad976362e52467fd0,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81620bc09c681970184caacee9d8f2bdbab6c32a58f39a58446b72afb8dc9407,PodSandboxId:57f690bff20a2d5c7ad517c3b5496b1156100bc04274085178b8c65dd74102bd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12da
f485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765017384127986605,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 756e0750a6732f41f16f4b7b8e627d9b,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca28df94e41085ddadb99a077625caff629e055cc8ae7009649fb618a1f8c943,PodSandboxId:5855c9b5abc427d12d4af700ed4faea9b3b721d2319059757a5d51937652abcf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120
f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765017360775343824,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79289946cd25f29b404867a34cf3287b,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e9e3a2b7bc2fda1638f69e12db42711d3ac02ebc790d67250daf6b15c963eb4,PodSandboxId:56472d9974f1f9f0f703fc719d7257fb402ba4f32e5dd3fbf71296015640386a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1
,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765017360671800743,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmzzj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4488afb5-0b73-4848-a3f3-c7336feac4f3,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c69c972f222208cf4d5a370865dbfb06eae3160d31223dd19809e5b0cf80378d,PodSandboxId:96f91b00ec54aa6941734a4c5d60a8da4d942f04550aee3a058321655734aeab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f
569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765017360579726461,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73e60cc2d982768ad976362e52467fd0,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f046926dfe49bcebfc784103713fd9e1334f5b98b8672a1a303e0c39ddb8633,PodSandboxId:7d1f1d6c31fd46082d1d5d2564230a77bc8a886b722d15698
2c5d96502aff2dd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765017360463302968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bc552323c16a27b410b35f696f26c30,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bd30e9a544651c2e7f3f2
b99fc219f74f5da0f762cee03f060b5cd77aefa4db,PodSandboxId:57f690bff20a2d5c7ad517c3b5496b1156100bc04274085178b8c65dd74102bd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765017360469091993,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 756e0750a6732f41f16f4b7b8e627d9b,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d387cfa7f24b3b950f15bd033e199852ff6c5648f9ef8df2f4cde0ca3ec3712,PodSandboxId:75beb4bdee5b117fe7ef6b48e84b4db2bce262a65a0b817f4bd1b0248137557d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765017316240432710,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-fb62d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88027edc-48b1-4cef-b502-1862cea06db0,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"
TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5590905c07e4e7296409a32a3292feda5f28ec20bf8b8339a897004396bfb7,PodSandboxId:efbd89472c2319edaf08530523fec9f4ddbb54be7f2f9f618c337467d84e3a3e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765017315834529074,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name:
kube-proxy-qmzzj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4488afb5-0b73-4848-a3f3-c7336feac4f3,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90c3e9a407fc16ba86ec497af261fcd4e80faf407cc3e556c33ecdaf3221f861,PodSandboxId:4167909c1067976616c1119a060169cac5524f93d78fc01670d9903533393729,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765017301876202735,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-672164,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 79289946cd25f29b404867a34cf3287b,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fdf6d5a746b1b843dae69265868007ab66276f60458b7eac802101cb3fa0b82,PodSandboxId:c4c188ea6f8262a2b295144f00d1faea89115c0676f4f0dc24ae1aa08e36fef9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765017301869342260,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bc552323c16a27b410b35f696f26c30,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c8fbfbac-8a46-4603-ab2a-402ece16ea38 name=/runtime.v1.RuntimeService/ListContainers
	
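	The Version, ImageFsInfo, and ListContainers traffic above is routine CRI polling rather than anything specific to the failure; kubelet and the log-collection tooling issue these RPCs regularly. For reference, a minimal Go sketch that sends the same Version and ImageFsInfo requests to the CRI-O socket is shown below. The socket path and 5-second timeout are assumptions for illustration, not values taken from this run, and the sketch is not part of the test harness.
	
	package main
	
	import (
		"context"
		"fmt"
		"log"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		// Assumed CRI-O socket path; adjust if the runtime listens elsewhere.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatalf("dial CRI socket: %v", err)
		}
		defer conn.Close()
	
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
	
		// RuntimeService/Version, matching the Version request/response pairs above.
		rt := runtimeapi.NewRuntimeServiceClient(conn)
		ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			log.Fatalf("Version: %v", err)
		}
		fmt.Printf("%s %s (CRI %s)\n", ver.GetRuntimeName(), ver.GetRuntimeVersion(), ver.GetRuntimeApiVersion())
	
		// ImageService/ImageFsInfo, matching the ImageFsInfo request/response pairs above.
		img := runtimeapi.NewImageServiceClient(conn)
		fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
		if err != nil {
			log.Fatalf("ImageFsInfo: %v", err)
		}
		for _, f := range fs.GetImageFilesystems() {
			fmt.Printf("%s: %d bytes, %d inodes used\n",
				f.GetFsId().GetMountpoint(), f.GetUsedBytes().GetValue(), f.GetInodesUsed().GetValue())
		}
	}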
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	8979432950a6c       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   17 seconds ago       Running             coredns                   1                   99e609d092359       coredns-66bc5c9577-fb62d               kube-system
	e4ffca8ac4a6b       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   20 seconds ago       Running             kube-apiserver            2                   96f91b00ec54a       kube-apiserver-pause-672164            kube-system
	81620bc09c681       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   20 seconds ago       Running             kube-controller-manager   2                   57f690bff20a2       kube-controller-manager-pause-672164   kube-system
	ca28df94e4108       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   44 seconds ago       Running             etcd                      1                   5855c9b5abc42       etcd-pause-672164                      kube-system
	6e9e3a2b7bc2f       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   44 seconds ago       Running             kube-proxy                1                   56472d9974f1f       kube-proxy-qmzzj                       kube-system
	c69c972f22220       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   44 seconds ago       Exited              kube-apiserver            1                   96f91b00ec54a       kube-apiserver-pause-672164            kube-system
	0bd30e9a54465       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   44 seconds ago       Exited              kube-controller-manager   1                   57f690bff20a2       kube-controller-manager-pause-672164   kube-system
	7f046926dfe49       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   44 seconds ago       Running             kube-scheduler            1                   7d1f1d6c31fd4       kube-scheduler-pause-672164            kube-system
	6d387cfa7f24b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   About a minute ago   Exited              coredns                   0                   75beb4bdee5b1       coredns-66bc5c9577-fb62d               kube-system
	2d5590905c07e       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   About a minute ago   Exited              kube-proxy                0                   efbd89472c231       kube-proxy-qmzzj                       kube-system
	90c3e9a407fc1       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   About a minute ago   Exited              etcd                      0                   4167909c10679       etcd-pause-672164                      kube-system
	2fdf6d5a746b1       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   About a minute ago   Exited              kube-scheduler            0                   c4c188ea6f826       kube-scheduler-pause-672164            kube-system
	
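	The listing above is the CRI-level view after the restart cycle: each component has a Running attempt alongside the Exited attempt it replaced. A sketch that produces a similar listing through RuntimeService/ListContainers follows; it reuses the same assumed socket path as the previous sketch and is illustrative only.
	
	package main
	
	import (
		"context"
		"fmt"
		"log"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		// Assumed CRI-O socket path, as in the previous sketch.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatalf("dial CRI socket: %v", err)
		}
		defer conn.Close()
	
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
	
		rt := runtimeapi.NewRuntimeServiceClient(conn)
		// An empty filter returns every container, running or exited, which is
		// what the ListContainers responses in the crio debug log contain.
		resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatalf("ListContainers: %v", err)
		}
		for _, c := range resp.GetContainers() {
			fmt.Printf("%-13.13s %-25s %-17v attempt=%d created=%s\n",
				c.GetId(), c.GetMetadata().GetName(), c.GetState(),
				c.GetMetadata().GetAttempt(),
				time.Unix(0, c.GetCreatedAt()).Format(time.RFC3339))
		}
	}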
	
	==> coredns [6d387cfa7f24b3b950f15bd033e199852ff6c5648f9ef8df2f4cde0ca3ec3712] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37591 - 16463 "HINFO IN 1900317965491801.2060865466064877640. udp 54 false 512" NXDOMAIN qr,rd,ra 129 0.092752672s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
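	The first CoreDNS instance above never managed to reach the API server: its list calls for EndpointSlices, Namespaces, and Services against the kubernetes Service VIP at 10.96.0.1:443 all ended in an i/o timeout before the pod was told to shut down. A minimal reachability probe for that VIP is sketched below as a diagnostic one could run from inside a pod on the node; the 3-second timeout is an assumption, and this check is not something the test itself performs.
	
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		// 10.96.0.1:443 is the in-cluster "kubernetes" Service VIP that the
		// CoreDNS errors above point at; the 3s timeout is an assumed budget.
		const addr = "10.96.0.1:443"
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err != nil {
			// An unreachable VIP surfaces here as the same kind of i/o timeout
			// that CoreDNS logged while the control plane was restarting.
			fmt.Printf("dial %s failed: %v\n", addr, err)
			return
		}
		defer conn.Close()
		fmt.Printf("dial %s ok (local address %s)\n", addr, conn.LocalAddr())
	}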
	
	==> coredns [8979432950a6c1fcf67c68b2eb4f02b47f20805a424008f709a4c7c1336ae55a] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50122 - 39643 "HINFO IN 1540554435651966770.8493445087611373493. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.02778475s
	
	
	==> describe nodes <==
	Name:               pause-672164
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-672164
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4
	                    minikube.k8s.io/name=pause-672164
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T10_35_10_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 10:35:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-672164
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 10:36:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 10:36:26 +0000   Sat, 06 Dec 2025 10:35:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 10:36:26 +0000   Sat, 06 Dec 2025 10:35:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 10:36:26 +0000   Sat, 06 Dec 2025 10:35:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 10:36:26 +0000   Sat, 06 Dec 2025 10:35:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.24
	  Hostname:    pause-672164
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 66b45a3b0c114ec58b30954c27db0a28
	  System UUID:                66b45a3b-0c11-4ec5-8b30-954c27db0a28
	  Boot ID:                    8f23c7cc-dbb6-45ee-ae76-d8d4fe14105b
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-fb62d                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     90s
	  kube-system                 etcd-pause-672164                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         96s
	  kube-system                 kube-apiserver-pause-672164             250m (12%)    0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-controller-manager-pause-672164    200m (10%)    0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-proxy-qmzzj                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-scheduler-pause-672164             100m (5%)     0 (0%)      0 (0%)           0 (0%)         96s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 88s                kube-proxy       
	  Normal  Starting                 18s                kube-proxy       
	  Normal  NodeHasSufficientPID     96s                kubelet          Node pause-672164 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  96s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  96s                kubelet          Node pause-672164 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    96s                kubelet          Node pause-672164 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 96s                kubelet          Starting kubelet.
	  Normal  NodeReady                95s                kubelet          Node pause-672164 status is now: NodeReady
	  Normal  RegisteredNode           91s                node-controller  Node pause-672164 event: Registered Node pause-672164 in Controller
	  Normal  Starting                 42s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  41s (x8 over 41s)  kubelet          Node pause-672164 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s (x8 over 41s)  kubelet          Node pause-672164 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s (x7 over 41s)  kubelet          Node pause-672164 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  41s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           15s                node-controller  Node pause-672164 event: Registered Node pause-672164 in Controller
	
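	The node description above shows pause-672164 staying Ready across the restart: the conditions were last confirmed at 10:36:26, and the event list records a second kubelet start and a second RegisteredNode event roughly 40 and 15 seconds before the log was captured. For reference, a hedged client-go sketch that reads the same condition block programmatically is shown below; the kubeconfig path and reliance on the current context are assumptions, not how the test harness inspects the cluster.
	
	package main
	
	import (
		"context"
		"fmt"
		"log"
		"os"
		"path/filepath"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Assumed kubeconfig location; minikube writes its contexts here by default.
		kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			log.Fatalf("load kubeconfig: %v", err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatalf("build clientset: %v", err)
		}
	
		node, err := cs.CoreV1().Nodes().Get(context.Background(), "pause-672164", metav1.GetOptions{})
		if err != nil {
			log.Fatalf("get node: %v", err)
		}
		// Same fields as the Conditions table in the describe output above.
		for _, c := range node.Status.Conditions {
			fmt.Printf("%-16s %-6s %-28s %s\n", c.Type, c.Status, c.Reason, c.Message)
		}
	}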
	
	==> dmesg <==
	[Dec 6 10:34] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001378] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.003040] (rpcbind)[120]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.171576] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000013] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.094809] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.097846] kauditd_printk_skb: 130 callbacks suppressed
	[Dec 6 10:35] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.943285] kauditd_printk_skb: 18 callbacks suppressed
	[ +29.475400] kauditd_printk_skb: 190 callbacks suppressed
	[Dec 6 10:36] kauditd_printk_skb: 304 callbacks suppressed
	[ +19.507323] kauditd_printk_skb: 12 callbacks suppressed
	[  +1.705251] kauditd_printk_skb: 43 callbacks suppressed
	
	
	==> etcd [90c3e9a407fc16ba86ec497af261fcd4e80faf407cc3e556c33ecdaf3221f861] <==
	{"level":"warn","ts":"2025-12-06T10:35:19.017291Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"260.677042ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1654398807581190815 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:16f59af33ab4ea9e>","response":"size:39"}
	{"level":"warn","ts":"2025-12-06T10:35:19.017341Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-06T10:35:18.628988Z","time spent":"388.350967ms","remote":"127.0.0.1:55148","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2025-12-06T10:35:19.146746Z","caller":"traceutil/trace.go:172","msg":"trace[249881576] linearizableReadLoop","detail":"{readStateIndex:385; appliedIndex:385; }","duration":"115.082962ms","start":"2025-12-06T10:35:19.031641Z","end":"2025-12-06T10:35:19.146724Z","steps":["trace[249881576] 'read index received'  (duration: 115.076725ms)","trace[249881576] 'applied index is now lower than readState.Index'  (duration: 5.266µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T10:35:19.276740Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"245.084793ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-fb62d\" limit:1 ","response":"range_response_count:1 size:5628"}
	{"level":"info","ts":"2025-12-06T10:35:19.276822Z","caller":"traceutil/trace.go:172","msg":"trace[286604001] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-fb62d; range_end:; response_count:1; response_revision:374; }","duration":"245.169643ms","start":"2025-12-06T10:35:19.031638Z","end":"2025-12-06T10:35:19.276807Z","steps":["trace[286604001] 'agreement among raft nodes before linearized reading'  (duration: 115.178966ms)","trace[286604001] 'range keys from in-memory index tree'  (duration: 129.732026ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T10:35:19.276757Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.852378ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1654398807581190817 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.24\" mod_revision:204 > success:<request_put:<key:\"/registry/masterleases/192.168.39.24\" value_size:66 lease:1654398807581190814 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.24\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-06T10:35:19.276991Z","caller":"traceutil/trace.go:172","msg":"trace[1608792020] transaction","detail":"{read_only:false; response_revision:375; number_of_response:1; }","duration":"257.429207ms","start":"2025-12-06T10:35:19.019552Z","end":"2025-12-06T10:35:19.276981Z","steps":["trace[1608792020] 'process raft request'  (duration: 127.30437ms)","trace[1608792020] 'compare'  (duration: 129.784477ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-06T10:35:51.142252Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-06T10:35:51.142347Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-672164","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.24:2380"],"advertise-client-urls":["https://192.168.39.24:2379"]}
	{"level":"error","ts":"2025-12-06T10:35:51.142443Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-06T10:35:51.225586Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-06T10:35:51.225737Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-06T10:35:51.225774Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"602226ed500416f5","current-leader-member-id":"602226ed500416f5"}
	{"level":"info","ts":"2025-12-06T10:35:51.225812Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-06T10:35:51.225873Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-06T10:35:51.225869Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-06T10:35:51.225962Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-06T10:35:51.225968Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-06T10:35:51.226002Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.24:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-06T10:35:51.226099Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.24:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-06T10:35:51.226110Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.24:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-06T10:35:51.230258Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.24:2380"}
	{"level":"error","ts":"2025-12-06T10:35:51.230325Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.24:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-06T10:35:51.230347Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.24:2380"}
	{"level":"info","ts":"2025-12-06T10:35:51.230353Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-672164","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.24:2380"],"advertise-client-urls":["https://192.168.39.24:2379"]}
	
	
	==> etcd [ca28df94e41085ddadb99a077625caff629e055cc8ae7009649fb618a1f8c943] <==
	{"level":"warn","ts":"2025-12-06T10:36:25.643136Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:36:25.654499Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:36:25.670871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:36:25.684214Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:36:25.695414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:36:25.712992Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:36:25.725481Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:36:25.734074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:36:25.743803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:36:25.750606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:36:25.758373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:36:25.769290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:36:25.778765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:36:25.789063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:36:25.796182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:36:25.807646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:36:25.819159Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:36:25.830528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:36:25.841622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:36:25.855525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:36:25.873256Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:36:25.891609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:36:25.904531Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:36:25.912971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:36:25.959962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55150","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:36:45 up 2 min,  0 users,  load average: 0.48, 0.21, 0.08
	Linux pause-672164 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Dec  4 13:30:13 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [c69c972f222208cf4d5a370865dbfb06eae3160d31223dd19809e5b0cf80378d] <==
	W1206 10:36:02.131237       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 10:36:02.131305       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I1206 10:36:02.154781       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I1206 10:36:02.197234       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1206 10:36:02.222823       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1206 10:36:02.222875       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1206 10:36:02.229220       1 instance.go:239] Using reconciler: lease
	W1206 10:36:02.245545       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1206 10:36:02.248239       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 10:36:03.131952       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 10:36:03.132306       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 10:36:03.248628       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 10:36:04.594936       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 10:36:04.743565       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 10:36:04.956190       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 10:36:06.956366       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 10:36:07.549125       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 10:36:07.882407       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 10:36:10.527535       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 10:36:11.987621       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 10:36:12.604829       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 10:36:17.443931       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 10:36:18.742784       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 10:36:19.709967       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F1206 10:36:22.232889       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [e4ffca8ac4a6b84e6d70cdaebac985a812ae72c311b3827a79f576f90f19e53e] <==
	I1206 10:36:26.737769       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1206 10:36:26.740528       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	E1206 10:36:26.743184       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1206 10:36:26.743485       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1206 10:36:26.744394       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1206 10:36:26.754884       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1206 10:36:26.754996       1 aggregator.go:171] initial CRD sync complete...
	I1206 10:36:26.755092       1 autoregister_controller.go:144] Starting autoregister controller
	I1206 10:36:26.755100       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1206 10:36:26.755246       1 cache.go:39] Caches are synced for autoregister controller
	I1206 10:36:26.755325       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1206 10:36:26.755473       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1206 10:36:26.782873       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1206 10:36:26.786476       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 10:36:26.791402       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1206 10:36:26.791654       1 policy_source.go:240] refreshing policies
	I1206 10:36:26.812510       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 10:36:27.006480       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1206 10:36:27.538581       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1206 10:36:28.180564       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1206 10:36:28.230960       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1206 10:36:28.275177       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 10:36:28.286000       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 10:36:30.107171       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 10:36:30.354620       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [0bd30e9a544651c2e7f3f2b99fc219f74f5da0f762cee03f060b5cd77aefa4db] <==
	I1206 10:36:02.626926       1 serving.go:386] Generated self-signed cert in-memory
	I1206 10:36:03.136398       1 controllermanager.go:191] "Starting" version="v1.34.2"
	I1206 10:36:03.136432       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 10:36:03.139750       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1206 10:36:03.140007       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1206 10:36:03.140075       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1206 10:36:03.140294       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1206 10:36:23.242337       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.24:8443/healthz\": dial tcp 192.168.39.24:8443: connect: connection refused"
	
	
	==> kube-controller-manager [81620bc09c681970184caacee9d8f2bdbab6c32a58f39a58446b72afb8dc9407] <==
	I1206 10:36:30.112591       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1206 10:36:30.112658       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1206 10:36:30.115496       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1206 10:36:30.121715       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1206 10:36:30.124561       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1206 10:36:30.145946       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 10:36:30.148783       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1206 10:36:30.148854       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1206 10:36:30.148904       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1206 10:36:30.149768       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1206 10:36:30.149886       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 10:36:30.149899       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1206 10:36:30.149915       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1206 10:36:30.150716       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1206 10:36:30.156571       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1206 10:36:30.158703       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 10:36:30.159306       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1206 10:36:30.159437       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1206 10:36:30.159531       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-672164"
	I1206 10:36:30.159602       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1206 10:36:30.163890       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1206 10:36:30.163994       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1206 10:36:30.168441       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1206 10:36:30.168558       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1206 10:36:30.171950       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	
	
	==> kube-proxy [2d5590905c07e4e7296409a32a3292feda5f28ec20bf8b8339a897004396bfb7] <==
	I1206 10:35:16.257251       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1206 10:35:16.358922       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 10:35:16.358980       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.24"]
	E1206 10:35:16.359122       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 10:35:16.418386       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1206 10:35:16.418512       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1206 10:35:16.418555       1 server_linux.go:132] "Using iptables Proxier"
	I1206 10:35:16.430290       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 10:35:16.430873       1 server.go:527] "Version info" version="v1.34.2"
	I1206 10:35:16.430918       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 10:35:16.436459       1 config.go:200] "Starting service config controller"
	I1206 10:35:16.436506       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 10:35:16.436533       1 config.go:106] "Starting endpoint slice config controller"
	I1206 10:35:16.436540       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 10:35:16.436552       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 10:35:16.436556       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 10:35:16.442587       1 config.go:309] "Starting node config controller"
	I1206 10:35:16.442683       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 10:35:16.442692       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 10:35:16.537178       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 10:35:16.537260       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 10:35:16.537141       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [6e9e3a2b7bc2fda1638f69e12db42711d3ac02ebc790d67250daf6b15c963eb4] <==
	E1206 10:36:23.242622       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-672164&limit=500&resourceVersion=0\": dial tcp 192.168.39.24:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.24:42856->192.168.39.24:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1206 10:36:26.745143       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 10:36:26.745237       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.24"]
	E1206 10:36:26.745403       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 10:36:26.792148       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1206 10:36:26.792244       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1206 10:36:26.792281       1 server_linux.go:132] "Using iptables Proxier"
	I1206 10:36:26.805225       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 10:36:26.805597       1 server.go:527] "Version info" version="v1.34.2"
	I1206 10:36:26.805617       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 10:36:26.814458       1 config.go:200] "Starting service config controller"
	I1206 10:36:26.814512       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 10:36:26.814541       1 config.go:106] "Starting endpoint slice config controller"
	I1206 10:36:26.814545       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 10:36:26.814558       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 10:36:26.814575       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 10:36:26.815590       1 config.go:309] "Starting node config controller"
	I1206 10:36:26.815601       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 10:36:26.815606       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 10:36:26.915115       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 10:36:26.915151       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 10:36:26.915179       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2fdf6d5a746b1b843dae69265868007ab66276f60458b7eac802101cb3fa0b82] <==
	E1206 10:35:06.228788       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1206 10:35:06.249961       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 10:35:06.344084       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1206 10:35:06.346704       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 10:35:06.466550       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1206 10:35:06.483949       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1206 10:35:06.516884       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 10:35:06.520450       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1206 10:35:06.576069       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1206 10:35:06.603928       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1206 10:35:06.614194       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1206 10:35:06.618746       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1206 10:35:06.630242       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 10:35:06.663004       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 10:35:06.681553       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1206 10:35:06.748948       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1206 10:35:06.871320       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1206 10:35:07.941423       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	I1206 10:35:08.601843       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 10:35:51.141229       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1206 10:35:51.141471       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 10:35:51.147170       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1206 10:35:51.147230       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1206 10:35:51.147264       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1206 10:35:51.147297       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [7f046926dfe49bcebfc784103713fd9e1334f5b98b8672a1a303e0c39ddb8633] <==
	E1206 10:36:23.259236       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.39.24:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.24:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1206 10:36:23.259388       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.39.24:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.24:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 10:36:24.118730       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.39.24:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.24:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1206 10:36:24.123299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.39.24:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.24:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 10:36:24.192354       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.39.24:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.24:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1206 10:36:24.219579       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.24:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.24:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1206 10:36:24.311647       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.24:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.24:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 10:36:24.318290       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.39.24:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.39.24:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1206 10:36:24.335840       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.39.24:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.24:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1206 10:36:24.349198       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.39.24:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.24:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1206 10:36:26.645254       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 10:36:26.645401       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1206 10:36:26.645485       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1206 10:36:26.645562       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1206 10:36:26.645688       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 10:36:26.645816       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 10:36:26.645911       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1206 10:36:26.646059       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1206 10:36:26.646228       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 10:36:26.646421       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1206 10:36:26.646497       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1206 10:36:26.646584       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1206 10:36:26.646615       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1206 10:36:26.659988       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1206 10:36:29.555413       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 06 10:36:26 pause-672164 kubelet[3438]: E1206 10:36:26.137676    3438 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-672164\" not found" node="pause-672164"
	Dec 06 10:36:26 pause-672164 kubelet[3438]: E1206 10:36:26.139248    3438 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-672164\" not found" node="pause-672164"
	Dec 06 10:36:26 pause-672164 kubelet[3438]: E1206 10:36:26.139553    3438 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-672164\" not found" node="pause-672164"
	Dec 06 10:36:26 pause-672164 kubelet[3438]: I1206 10:36:26.681195    3438 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-672164"
	Dec 06 10:36:26 pause-672164 kubelet[3438]: E1206 10:36:26.805758    3438 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-pause-672164\" already exists" pod="kube-system/etcd-pause-672164"
	Dec 06 10:36:26 pause-672164 kubelet[3438]: I1206 10:36:26.805927    3438 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-672164"
	Dec 06 10:36:26 pause-672164 kubelet[3438]: E1206 10:36:26.826852    3438 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-672164\" already exists" pod="kube-system/kube-apiserver-pause-672164"
	Dec 06 10:36:26 pause-672164 kubelet[3438]: I1206 10:36:26.827075    3438 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-672164"
	Dec 06 10:36:26 pause-672164 kubelet[3438]: E1206 10:36:26.836817    3438 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-672164\" already exists" pod="kube-system/kube-controller-manager-pause-672164"
	Dec 06 10:36:26 pause-672164 kubelet[3438]: I1206 10:36:26.837075    3438 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-672164"
	Dec 06 10:36:26 pause-672164 kubelet[3438]: E1206 10:36:26.847102    3438 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-672164\" already exists" pod="kube-system/kube-scheduler-pause-672164"
	Dec 06 10:36:26 pause-672164 kubelet[3438]: I1206 10:36:26.861919    3438 apiserver.go:52] "Watching apiserver"
	Dec 06 10:36:26 pause-672164 kubelet[3438]: I1206 10:36:26.880895    3438 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 06 10:36:26 pause-672164 kubelet[3438]: I1206 10:36:26.886782    3438 kubelet_node_status.go:124] "Node was previously registered" node="pause-672164"
	Dec 06 10:36:26 pause-672164 kubelet[3438]: I1206 10:36:26.887226    3438 kubelet_node_status.go:78] "Successfully registered node" node="pause-672164"
	Dec 06 10:36:26 pause-672164 kubelet[3438]: I1206 10:36:26.887393    3438 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 06 10:36:26 pause-672164 kubelet[3438]: I1206 10:36:26.889617    3438 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 06 10:36:26 pause-672164 kubelet[3438]: I1206 10:36:26.981145    3438 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4488afb5-0b73-4848-a3f3-c7336feac4f3-xtables-lock\") pod \"kube-proxy-qmzzj\" (UID: \"4488afb5-0b73-4848-a3f3-c7336feac4f3\") " pod="kube-system/kube-proxy-qmzzj"
	Dec 06 10:36:26 pause-672164 kubelet[3438]: I1206 10:36:26.981175    3438 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4488afb5-0b73-4848-a3f3-c7336feac4f3-lib-modules\") pod \"kube-proxy-qmzzj\" (UID: \"4488afb5-0b73-4848-a3f3-c7336feac4f3\") " pod="kube-system/kube-proxy-qmzzj"
	Dec 06 10:36:27 pause-672164 kubelet[3438]: I1206 10:36:27.139006    3438 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-672164"
	Dec 06 10:36:27 pause-672164 kubelet[3438]: E1206 10:36:27.148599    3438 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-672164\" already exists" pod="kube-system/kube-apiserver-pause-672164"
	Dec 06 10:36:34 pause-672164 kubelet[3438]: E1206 10:36:34.020892    3438 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765017394020552356 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	Dec 06 10:36:34 pause-672164 kubelet[3438]: E1206 10:36:34.020921    3438 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765017394020552356 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	Dec 06 10:36:44 pause-672164 kubelet[3438]: E1206 10:36:44.022988    3438 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765017404022167381 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	Dec 06 10:36:44 pause-672164 kubelet[3438]: E1206 10:36:44.023084    3438 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765017404022167381 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-672164 -n pause-672164
helpers_test.go:269: (dbg) Run:  kubectl --context pause-672164 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-672164 -n pause-672164
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-672164 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-672164 logs -n 25: (1.329161471s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬───────────────────
──┐
	│ COMMAND │                                                                                                                    ARGS                                                                                                                     │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼───────────────────
──┤
	│ ssh     │ -p cilium-777177 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                   │ cilium-777177             │ jenkins │ v1.37.0 │ 06 Dec 25 10:34 UTC │                     │
	│ ssh     │ -p cilium-777177 sudo systemctl cat containerd --no-pager                                                                                                                                                                                   │ cilium-777177             │ jenkins │ v1.37.0 │ 06 Dec 25 10:34 UTC │                     │
	│ ssh     │ -p cilium-777177 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                            │ cilium-777177             │ jenkins │ v1.37.0 │ 06 Dec 25 10:34 UTC │                     │
	│ ssh     │ -p cilium-777177 sudo cat /etc/containerd/config.toml                                                                                                                                                                                       │ cilium-777177             │ jenkins │ v1.37.0 │ 06 Dec 25 10:34 UTC │                     │
	│ ssh     │ -p cilium-777177 sudo containerd config dump                                                                                                                                                                                                │ cilium-777177             │ jenkins │ v1.37.0 │ 06 Dec 25 10:34 UTC │                     │
	│ ssh     │ -p cilium-777177 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                         │ cilium-777177             │ jenkins │ v1.37.0 │ 06 Dec 25 10:34 UTC │                     │
	│ ssh     │ -p cilium-777177 sudo systemctl cat crio --no-pager                                                                                                                                                                                         │ cilium-777177             │ jenkins │ v1.37.0 │ 06 Dec 25 10:34 UTC │                     │
	│ ssh     │ -p cilium-777177 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                               │ cilium-777177             │ jenkins │ v1.37.0 │ 06 Dec 25 10:34 UTC │                     │
	│ ssh     │ -p cilium-777177 sudo crio config                                                                                                                                                                                                           │ cilium-777177             │ jenkins │ v1.37.0 │ 06 Dec 25 10:34 UTC │                     │
	│ delete  │ -p cilium-777177                                                                                                                                                                                                                            │ cilium-777177             │ jenkins │ v1.37.0 │ 06 Dec 25 10:34 UTC │ 06 Dec 25 10:34 UTC │
	│ start   │ -p guest-968200 --no-kubernetes --driver=kvm2  --container-runtime=crio                                                                                                                                                                     │ guest-968200              │ jenkins │ v1.37.0 │ 06 Dec 25 10:34 UTC │ 06 Dec 25 10:34 UTC │
	│ ssh     │ -p NoKubernetes-012243 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                     │ NoKubernetes-012243       │ jenkins │ v1.37.0 │ 06 Dec 25 10:34 UTC │                     │
	│ delete  │ -p NoKubernetes-012243                                                                                                                                                                                                                      │ NoKubernetes-012243       │ jenkins │ v1.37.0 │ 06 Dec 25 10:34 UTC │ 06 Dec 25 10:34 UTC │
	│ start   │ -p pause-672164 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio                                                                                                                                     │ pause-672164              │ jenkins │ v1.37.0 │ 06 Dec 25 10:34 UTC │ 06 Dec 25 10:35 UTC │
	│ delete  │ -p force-systemd-env-294790                                                                                                                                                                                                                 │ force-systemd-env-294790  │ jenkins │ v1.37.0 │ 06 Dec 25 10:34 UTC │ 06 Dec 25 10:34 UTC │
	│ start   │ -p cert-expiration-694719 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio                                                                                                                                        │ cert-expiration-694719    │ jenkins │ v1.37.0 │ 06 Dec 25 10:34 UTC │ 06 Dec 25 10:35 UTC │
	│ start   │ -p force-systemd-flag-524307 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                                                                                                   │ force-systemd-flag-524307 │ jenkins │ v1.37.0 │ 06 Dec 25 10:34 UTC │ 06 Dec 25 10:35 UTC │
	│ start   │ -p pause-672164 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                                                              │ pause-672164              │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:36 UTC │
	│ ssh     │ force-systemd-flag-524307 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                        │ force-systemd-flag-524307 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ delete  │ -p force-systemd-flag-524307                                                                                                                                                                                                                │ force-systemd-flag-524307 │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:35 UTC │
	│ start   │ -p cert-options-322688 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio                     │ cert-options-322688       │ jenkins │ v1.37.0 │ 06 Dec 25 10:35 UTC │ 06 Dec 25 10:36 UTC │
	│ ssh     │ cert-options-322688 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                 │ cert-options-322688       │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ ssh     │ -p cert-options-322688 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                               │ cert-options-322688       │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ delete  │ -p cert-options-322688                                                                                                                                                                                                                      │ cert-options-322688       │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ start   │ -p old-k8s-version-147016 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-147016    │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴───────────────────
──┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:36:37
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 10:36:37.950552  435050 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:36:37.950757  435050 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:36:37.950772  435050 out.go:374] Setting ErrFile to fd 2...
	I1206 10:36:37.950778  435050 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:36:37.951005  435050 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-392561/.minikube/bin
	I1206 10:36:37.951546  435050 out.go:368] Setting JSON to false
	I1206 10:36:37.952686  435050 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8338,"bootTime":1765009060,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 10:36:37.952763  435050 start.go:143] virtualization: kvm guest
	I1206 10:36:37.958452  435050 out.go:179] * [old-k8s-version-147016] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 10:36:37.960162  435050 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 10:36:37.960163  435050 notify.go:221] Checking for updates...
	I1206 10:36:37.963052  435050 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:36:37.964619  435050 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-392561/kubeconfig
	I1206 10:36:37.965977  435050 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-392561/.minikube
	I1206 10:36:37.967477  435050 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 10:36:37.969003  435050 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:36:37.971120  435050 config.go:182] Loaded profile config "cert-expiration-694719": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:36:37.971260  435050 config.go:182] Loaded profile config "guest-968200": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1206 10:36:37.971488  435050 config.go:182] Loaded profile config "pause-672164": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:36:37.971637  435050 config.go:182] Loaded profile config "running-upgrade-976040": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1206 10:36:37.971849  435050 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:36:38.016887  435050 out.go:179] * Using the kvm2 driver based on user configuration
	I1206 10:36:38.018315  435050 start.go:309] selected driver: kvm2
	I1206 10:36:38.018337  435050 start.go:927] validating driver "kvm2" against <nil>
	I1206 10:36:38.018353  435050 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:36:38.019177  435050 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 10:36:38.019469  435050 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 10:36:38.019510  435050 cni.go:84] Creating CNI manager for ""
	I1206 10:36:38.019567  435050 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 10:36:38.019578  435050 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1206 10:36:38.019638  435050 start.go:353] cluster config:
	{Name:old-k8s-version-147016 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-147016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:36:38.019793  435050 iso.go:125] acquiring lock: {Name:mkf36bf2c9901302dc74c7ac02d02007e6a978f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:36:38.022353  435050 out.go:179] * Starting "old-k8s-version-147016" primary control-plane node in "old-k8s-version-147016" cluster
	I1206 10:36:38.023973  435050 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1206 10:36:38.024014  435050 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-392561/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1206 10:36:38.024024  435050 cache.go:65] Caching tarball of preloaded images
	I1206 10:36:38.024166  435050 preload.go:238] Found /home/jenkins/minikube-integration/22047-392561/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1206 10:36:38.024180  435050 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1206 10:36:38.024279  435050 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/old-k8s-version-147016/config.json ...
	I1206 10:36:38.024300  435050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/old-k8s-version-147016/config.json: {Name:mkb22350b7c5e8da0bc592e69a175cfc7cd0671e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:36:38.024467  435050 start.go:360] acquireMachinesLock for old-k8s-version-147016: {Name:mk0e8456872a81874c47f1b4b5997728e70c766d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1206 10:36:38.024525  435050 start.go:364] duration metric: took 31.6µs to acquireMachinesLock for "old-k8s-version-147016"
	I1206 10:36:38.024563  435050 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-147016 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-147016 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 10:36:38.024621  435050 start.go:125] createHost starting for "" (driver="kvm2")
	W1206 10:36:36.702991  434530 pod_ready.go:104] pod "kube-apiserver-pause-672164" is not "Ready", error: <nil>
	W1206 10:36:38.703144  434530 pod_ready.go:104] pod "kube-apiserver-pause-672164" is not "Ready", error: <nil>
	I1206 10:36:37.068161  430005 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8443/healthz ...
	I1206 10:36:37.068894  430005 api_server.go:269] stopped: https://192.168.72.144:8443/healthz: Get "https://192.168.72.144:8443/healthz": dial tcp 192.168.72.144:8443: connect: connection refused
	I1206 10:36:37.068959  430005 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:36:37.069020  430005 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:36:37.112794  430005 cri.go:89] found id: "ef42a54ef499d5a2adf7f4aed819db372c989f7e0630b582fd079e349b6cae35"
	I1206 10:36:37.112828  430005 cri.go:89] found id: ""
	I1206 10:36:37.112839  430005 logs.go:282] 1 containers: [ef42a54ef499d5a2adf7f4aed819db372c989f7e0630b582fd079e349b6cae35]
	I1206 10:36:37.112909  430005 ssh_runner.go:195] Run: which crictl
	I1206 10:36:37.117579  430005 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:36:37.117654  430005 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:36:37.161855  430005 cri.go:89] found id: "c15436bd63f95394a3ea07f1a2070a38ed06ffd3d171293441fd7a3673e10d10"
	I1206 10:36:37.161882  430005 cri.go:89] found id: ""
	I1206 10:36:37.161892  430005 logs.go:282] 1 containers: [c15436bd63f95394a3ea07f1a2070a38ed06ffd3d171293441fd7a3673e10d10]
	I1206 10:36:37.161953  430005 ssh_runner.go:195] Run: which crictl
	I1206 10:36:37.166914  430005 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:36:37.166993  430005 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:36:37.217170  430005 cri.go:89] found id: "1939c8beb8b6117d08ce2adf290e6727b940fb305ab1a764cc15c20fa6939659"
	I1206 10:36:37.217196  430005 cri.go:89] found id: ""
	I1206 10:36:37.217206  430005 logs.go:282] 1 containers: [1939c8beb8b6117d08ce2adf290e6727b940fb305ab1a764cc15c20fa6939659]
	I1206 10:36:37.217269  430005 ssh_runner.go:195] Run: which crictl
	I1206 10:36:37.221935  430005 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:36:37.222004  430005 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:36:37.261787  430005 cri.go:89] found id: "c662e04651e0c43353787dce2b7695a04ebb8cf2e731530ce5121a2011ac2b2d"
	I1206 10:36:37.261812  430005 cri.go:89] found id: "056206a52c3bfc3442f384df1541dbb81537b66537f72a1a5b91a7c75ec9c8fd"
	I1206 10:36:37.261816  430005 cri.go:89] found id: ""
	I1206 10:36:37.261825  430005 logs.go:282] 2 containers: [c662e04651e0c43353787dce2b7695a04ebb8cf2e731530ce5121a2011ac2b2d 056206a52c3bfc3442f384df1541dbb81537b66537f72a1a5b91a7c75ec9c8fd]
	I1206 10:36:37.261897  430005 ssh_runner.go:195] Run: which crictl
	I1206 10:36:37.267346  430005 ssh_runner.go:195] Run: which crictl
	I1206 10:36:37.271598  430005 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:36:37.271676  430005 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:36:37.314194  430005 cri.go:89] found id: "92afa13c7c57b1c0db461ba6435fdef02f75277bda55beaae5114aa80081e98f"
	I1206 10:36:37.314224  430005 cri.go:89] found id: ""
	I1206 10:36:37.314235  430005 logs.go:282] 1 containers: [92afa13c7c57b1c0db461ba6435fdef02f75277bda55beaae5114aa80081e98f]
	I1206 10:36:37.314301  430005 ssh_runner.go:195] Run: which crictl
	I1206 10:36:37.318839  430005 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:36:37.318931  430005 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:36:37.370250  430005 cri.go:89] found id: "016cfd65d4fe02a416314f19fb648eae4fa5bc024733f20a4f380fd40d44cbf5"
	I1206 10:36:37.370276  430005 cri.go:89] found id: ""
	I1206 10:36:37.370287  430005 logs.go:282] 1 containers: [016cfd65d4fe02a416314f19fb648eae4fa5bc024733f20a4f380fd40d44cbf5]
	I1206 10:36:37.370355  430005 ssh_runner.go:195] Run: which crictl
	I1206 10:36:37.374911  430005 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:36:37.374996  430005 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:36:37.415011  430005 cri.go:89] found id: ""
	I1206 10:36:37.415044  430005 logs.go:282] 0 containers: []
	W1206 10:36:37.415054  430005 logs.go:284] No container was found matching "kindnet"
	I1206 10:36:37.415062  430005 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 10:36:37.415131  430005 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 10:36:37.452578  430005 cri.go:89] found id: "a9a482b1344e3ccf89f925482a320afd853f1776a4fb8bb8b6ab3c0bc3344b40"
	I1206 10:36:37.452609  430005 cri.go:89] found id: "54ddbf48cf44492ad8242a0598ba2231fd4f93ee0a4cec92d94c6def9c980987"
	I1206 10:36:37.452616  430005 cri.go:89] found id: ""
	I1206 10:36:37.452629  430005 logs.go:282] 2 containers: [a9a482b1344e3ccf89f925482a320afd853f1776a4fb8bb8b6ab3c0bc3344b40 54ddbf48cf44492ad8242a0598ba2231fd4f93ee0a4cec92d94c6def9c980987]
	I1206 10:36:37.452700  430005 ssh_runner.go:195] Run: which crictl
	I1206 10:36:37.457175  430005 ssh_runner.go:195] Run: which crictl
	I1206 10:36:37.461900  430005 logs.go:123] Gathering logs for kube-controller-manager [016cfd65d4fe02a416314f19fb648eae4fa5bc024733f20a4f380fd40d44cbf5] ...
	I1206 10:36:37.461932  430005 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 016cfd65d4fe02a416314f19fb648eae4fa5bc024733f20a4f380fd40d44cbf5"
	I1206 10:36:37.498162  430005 logs.go:123] Gathering logs for storage-provisioner [a9a482b1344e3ccf89f925482a320afd853f1776a4fb8bb8b6ab3c0bc3344b40] ...
	I1206 10:36:37.498202  430005 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9a482b1344e3ccf89f925482a320afd853f1776a4fb8bb8b6ab3c0bc3344b40"
	I1206 10:36:37.545701  430005 logs.go:123] Gathering logs for container status ...
	I1206 10:36:37.545768  430005 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:36:37.588860  430005 logs.go:123] Gathering logs for kubelet ...
	I1206 10:36:37.588895  430005 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:36:37.693126  430005 logs.go:123] Gathering logs for dmesg ...
	I1206 10:36:37.693173  430005 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:36:37.714326  430005 logs.go:123] Gathering logs for etcd [c15436bd63f95394a3ea07f1a2070a38ed06ffd3d171293441fd7a3673e10d10] ...
	I1206 10:36:37.714371  430005 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c15436bd63f95394a3ea07f1a2070a38ed06ffd3d171293441fd7a3673e10d10"
	I1206 10:36:37.764867  430005 logs.go:123] Gathering logs for coredns [1939c8beb8b6117d08ce2adf290e6727b940fb305ab1a764cc15c20fa6939659] ...
	I1206 10:36:37.764911  430005 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1939c8beb8b6117d08ce2adf290e6727b940fb305ab1a764cc15c20fa6939659"
	I1206 10:36:37.811730  430005 logs.go:123] Gathering logs for kube-scheduler [056206a52c3bfc3442f384df1541dbb81537b66537f72a1a5b91a7c75ec9c8fd] ...
	I1206 10:36:37.811772  430005 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 056206a52c3bfc3442f384df1541dbb81537b66537f72a1a5b91a7c75ec9c8fd"
	I1206 10:36:37.855806  430005 logs.go:123] Gathering logs for kube-proxy [92afa13c7c57b1c0db461ba6435fdef02f75277bda55beaae5114aa80081e98f] ...
	I1206 10:36:37.855855  430005 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92afa13c7c57b1c0db461ba6435fdef02f75277bda55beaae5114aa80081e98f"
	I1206 10:36:37.900665  430005 logs.go:123] Gathering logs for storage-provisioner [54ddbf48cf44492ad8242a0598ba2231fd4f93ee0a4cec92d94c6def9c980987] ...
	I1206 10:36:37.900702  430005 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54ddbf48cf44492ad8242a0598ba2231fd4f93ee0a4cec92d94c6def9c980987"
	I1206 10:36:37.953474  430005 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:36:37.953509  430005 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:36:38.282459  430005 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:36:38.282502  430005 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:36:38.362783  430005 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:36:38.362812  430005 logs.go:123] Gathering logs for kube-apiserver [ef42a54ef499d5a2adf7f4aed819db372c989f7e0630b582fd079e349b6cae35] ...
	I1206 10:36:38.362833  430005 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef42a54ef499d5a2adf7f4aed819db372c989f7e0630b582fd079e349b6cae35"
	I1206 10:36:38.416943  430005 logs.go:123] Gathering logs for kube-scheduler [c662e04651e0c43353787dce2b7695a04ebb8cf2e731530ce5121a2011ac2b2d] ...
	I1206 10:36:38.416980  430005 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c662e04651e0c43353787dce2b7695a04ebb8cf2e731530ce5121a2011ac2b2d"
	I1206 10:36:41.020439  430005 api_server.go:253] Checking apiserver healthz at https://192.168.72.144:8443/healthz ...
	I1206 10:36:41.021397  430005 api_server.go:269] stopped: https://192.168.72.144:8443/healthz: Get "https://192.168.72.144:8443/healthz": dial tcp 192.168.72.144:8443: connect: connection refused
	I1206 10:36:41.021457  430005 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:36:41.021518  430005 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:36:41.068202  430005 cri.go:89] found id: "ef42a54ef499d5a2adf7f4aed819db372c989f7e0630b582fd079e349b6cae35"
	I1206 10:36:41.068230  430005 cri.go:89] found id: ""
	I1206 10:36:41.068240  430005 logs.go:282] 1 containers: [ef42a54ef499d5a2adf7f4aed819db372c989f7e0630b582fd079e349b6cae35]
	I1206 10:36:41.068312  430005 ssh_runner.go:195] Run: which crictl
	I1206 10:36:41.073113  430005 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:36:41.073197  430005 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:36:41.112878  430005 cri.go:89] found id: "c15436bd63f95394a3ea07f1a2070a38ed06ffd3d171293441fd7a3673e10d10"
	I1206 10:36:41.112912  430005 cri.go:89] found id: ""
	I1206 10:36:41.112924  430005 logs.go:282] 1 containers: [c15436bd63f95394a3ea07f1a2070a38ed06ffd3d171293441fd7a3673e10d10]
	I1206 10:36:41.113004  430005 ssh_runner.go:195] Run: which crictl
	I1206 10:36:41.117999  430005 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:36:41.118088  430005 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:36:41.160912  430005 cri.go:89] found id: "1939c8beb8b6117d08ce2adf290e6727b940fb305ab1a764cc15c20fa6939659"
	I1206 10:36:41.160939  430005 cri.go:89] found id: ""
	I1206 10:36:41.160949  430005 logs.go:282] 1 containers: [1939c8beb8b6117d08ce2adf290e6727b940fb305ab1a764cc15c20fa6939659]
	I1206 10:36:41.161015  430005 ssh_runner.go:195] Run: which crictl
	I1206 10:36:41.166736  430005 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:36:41.166837  430005 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:36:41.212116  430005 cri.go:89] found id: "c662e04651e0c43353787dce2b7695a04ebb8cf2e731530ce5121a2011ac2b2d"
	I1206 10:36:41.212154  430005 cri.go:89] found id: "056206a52c3bfc3442f384df1541dbb81537b66537f72a1a5b91a7c75ec9c8fd"
	I1206 10:36:41.212160  430005 cri.go:89] found id: ""
	I1206 10:36:41.212171  430005 logs.go:282] 2 containers: [c662e04651e0c43353787dce2b7695a04ebb8cf2e731530ce5121a2011ac2b2d 056206a52c3bfc3442f384df1541dbb81537b66537f72a1a5b91a7c75ec9c8fd]
	I1206 10:36:41.212240  430005 ssh_runner.go:195] Run: which crictl
	I1206 10:36:41.218517  430005 ssh_runner.go:195] Run: which crictl
	I1206 10:36:41.223466  430005 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:36:41.223536  430005 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:36:41.271933  430005 cri.go:89] found id: "92afa13c7c57b1c0db461ba6435fdef02f75277bda55beaae5114aa80081e98f"
	I1206 10:36:41.271956  430005 cri.go:89] found id: ""
	I1206 10:36:41.271967  430005 logs.go:282] 1 containers: [92afa13c7c57b1c0db461ba6435fdef02f75277bda55beaae5114aa80081e98f]
	I1206 10:36:41.272037  430005 ssh_runner.go:195] Run: which crictl
	I1206 10:36:41.276689  430005 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:36:41.276793  430005 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:36:41.315856  430005 cri.go:89] found id: "016cfd65d4fe02a416314f19fb648eae4fa5bc024733f20a4f380fd40d44cbf5"
	I1206 10:36:41.315882  430005 cri.go:89] found id: ""
	I1206 10:36:41.315892  430005 logs.go:282] 1 containers: [016cfd65d4fe02a416314f19fb648eae4fa5bc024733f20a4f380fd40d44cbf5]
	I1206 10:36:41.315960  430005 ssh_runner.go:195] Run: which crictl
	I1206 10:36:41.321966  430005 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:36:41.322072  430005 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:36:41.360094  430005 cri.go:89] found id: ""
	I1206 10:36:41.360130  430005 logs.go:282] 0 containers: []
	W1206 10:36:41.360143  430005 logs.go:284] No container was found matching "kindnet"
	I1206 10:36:41.360152  430005 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 10:36:41.360235  430005 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 10:36:41.405142  430005 cri.go:89] found id: "a9a482b1344e3ccf89f925482a320afd853f1776a4fb8bb8b6ab3c0bc3344b40"
	I1206 10:36:41.405165  430005 cri.go:89] found id: "54ddbf48cf44492ad8242a0598ba2231fd4f93ee0a4cec92d94c6def9c980987"
	I1206 10:36:41.405169  430005 cri.go:89] found id: ""
	I1206 10:36:41.405177  430005 logs.go:282] 2 containers: [a9a482b1344e3ccf89f925482a320afd853f1776a4fb8bb8b6ab3c0bc3344b40 54ddbf48cf44492ad8242a0598ba2231fd4f93ee0a4cec92d94c6def9c980987]
	I1206 10:36:41.405231  430005 ssh_runner.go:195] Run: which crictl
	I1206 10:36:41.410002  430005 ssh_runner.go:195] Run: which crictl
	I1206 10:36:41.414618  430005 logs.go:123] Gathering logs for container status ...
	I1206 10:36:41.414656  430005 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:36:41.464639  430005 logs.go:123] Gathering logs for kubelet ...
	I1206 10:36:41.464683  430005 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:36:41.573231  430005 logs.go:123] Gathering logs for dmesg ...
	I1206 10:36:41.573272  430005 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:36:41.590456  430005 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:36:41.590489  430005 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:36:41.668656  430005 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:36:41.668686  430005 logs.go:123] Gathering logs for etcd [c15436bd63f95394a3ea07f1a2070a38ed06ffd3d171293441fd7a3673e10d10] ...
	I1206 10:36:41.668735  430005 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c15436bd63f95394a3ea07f1a2070a38ed06ffd3d171293441fd7a3673e10d10"
	I1206 10:36:41.721547  430005 logs.go:123] Gathering logs for kube-scheduler [c662e04651e0c43353787dce2b7695a04ebb8cf2e731530ce5121a2011ac2b2d] ...
	I1206 10:36:41.721596  430005 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c662e04651e0c43353787dce2b7695a04ebb8cf2e731530ce5121a2011ac2b2d"
	I1206 10:36:41.809490  430005 logs.go:123] Gathering logs for kube-scheduler [056206a52c3bfc3442f384df1541dbb81537b66537f72a1a5b91a7c75ec9c8fd] ...
	I1206 10:36:41.809565  430005 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 056206a52c3bfc3442f384df1541dbb81537b66537f72a1a5b91a7c75ec9c8fd"
	I1206 10:36:41.853673  430005 logs.go:123] Gathering logs for kube-proxy [92afa13c7c57b1c0db461ba6435fdef02f75277bda55beaae5114aa80081e98f] ...
	I1206 10:36:41.853724  430005 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92afa13c7c57b1c0db461ba6435fdef02f75277bda55beaae5114aa80081e98f"
	I1206 10:36:41.898365  430005 logs.go:123] Gathering logs for kube-controller-manager [016cfd65d4fe02a416314f19fb648eae4fa5bc024733f20a4f380fd40d44cbf5] ...
	I1206 10:36:41.898402  430005 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 016cfd65d4fe02a416314f19fb648eae4fa5bc024733f20a4f380fd40d44cbf5"
	I1206 10:36:41.936454  430005 logs.go:123] Gathering logs for kube-apiserver [ef42a54ef499d5a2adf7f4aed819db372c989f7e0630b582fd079e349b6cae35] ...
	I1206 10:36:41.936490  430005 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef42a54ef499d5a2adf7f4aed819db372c989f7e0630b582fd079e349b6cae35"
	I1206 10:36:41.983765  430005 logs.go:123] Gathering logs for coredns [1939c8beb8b6117d08ce2adf290e6727b940fb305ab1a764cc15c20fa6939659] ...
	I1206 10:36:41.983821  430005 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1939c8beb8b6117d08ce2adf290e6727b940fb305ab1a764cc15c20fa6939659"
	I1206 10:36:42.025314  430005 logs.go:123] Gathering logs for storage-provisioner [a9a482b1344e3ccf89f925482a320afd853f1776a4fb8bb8b6ab3c0bc3344b40] ...
	I1206 10:36:42.025352  430005 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9a482b1344e3ccf89f925482a320afd853f1776a4fb8bb8b6ab3c0bc3344b40"
	I1206 10:36:38.027144  435050 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1206 10:36:38.027335  435050 start.go:159] libmachine.API.Create for "old-k8s-version-147016" (driver="kvm2")
	I1206 10:36:38.027368  435050 client.go:173] LocalClient.Create starting
	I1206 10:36:38.027431  435050 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22047-392561/.minikube/certs/ca.pem
	I1206 10:36:38.027477  435050 main.go:143] libmachine: Decoding PEM data...
	I1206 10:36:38.027539  435050 main.go:143] libmachine: Parsing certificate...
	I1206 10:36:38.027624  435050 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22047-392561/.minikube/certs/cert.pem
	I1206 10:36:38.027653  435050 main.go:143] libmachine: Decoding PEM data...
	I1206 10:36:38.027664  435050 main.go:143] libmachine: Parsing certificate...
	I1206 10:36:38.028010  435050 main.go:143] libmachine: creating domain...
	I1206 10:36:38.028024  435050 main.go:143] libmachine: creating network...
	I1206 10:36:38.029454  435050 main.go:143] libmachine: found existing default network
	I1206 10:36:38.029694  435050 main.go:143] libmachine: <network connections='4'>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1206 10:36:38.030639  435050 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:71:c2:68} reservation:<nil>}
	I1206 10:36:38.031204  435050 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:9f:cc:ba} reservation:<nil>}
	I1206 10:36:38.031863  435050 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:21:0c:82} reservation:<nil>}
	I1206 10:36:38.032821  435050 network.go:211] skipping subnet 192.168.72.0/24 that is taken: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.72.1 IfaceMTU:1500 IfaceMAC:52:54:00:8f:e8:4c} reservation:<nil>}
	I1206 10:36:38.034119  435050 network.go:206] using free private subnet 192.168.83.0/24: &{IP:192.168.83.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.83.0/24 Gateway:192.168.83.1 ClientMin:192.168.83.2 ClientMax:192.168.83.254 Broadcast:192.168.83.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c35220}
	I1206 10:36:38.034210  435050 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-old-k8s-version-147016</name>
	  <dns enable='no'/>
	  <ip address='192.168.83.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.83.2' end='192.168.83.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1206 10:36:38.040752  435050 main.go:143] libmachine: creating private network mk-old-k8s-version-147016 192.168.83.0/24...
	I1206 10:36:38.124146  435050 main.go:143] libmachine: private network mk-old-k8s-version-147016 192.168.83.0/24 created
	I1206 10:36:38.124579  435050 main.go:143] libmachine: <network>
	  <name>mk-old-k8s-version-147016</name>
	  <uuid>9f3d0947-6fab-45f5-8d73-bd70632edeb2</uuid>
	  <bridge name='virbr5' stp='on' delay='0'/>
	  <mac address='52:54:00:fb:f0:5b'/>
	  <dns enable='no'/>
	  <ip address='192.168.83.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.83.2' end='192.168.83.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1206 10:36:38.124631  435050 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22047-392561/.minikube/machines/old-k8s-version-147016 ...
	I1206 10:36:38.124664  435050 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22047-392561/.minikube/cache/iso/amd64/minikube-v1.37.0-1764843329-22032-amd64.iso
	I1206 10:36:38.124676  435050 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22047-392561/.minikube
	I1206 10:36:38.124781  435050 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22047-392561/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22047-392561/.minikube/cache/iso/amd64/minikube-v1.37.0-1764843329-22032-amd64.iso...
	I1206 10:36:38.395889  435050 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22047-392561/.minikube/machines/old-k8s-version-147016/id_rsa...
	I1206 10:36:38.561671  435050 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22047-392561/.minikube/machines/old-k8s-version-147016/old-k8s-version-147016.rawdisk...
	I1206 10:36:38.561736  435050 main.go:143] libmachine: Writing magic tar header
	I1206 10:36:38.561763  435050 main.go:143] libmachine: Writing SSH key tar header
	I1206 10:36:38.561845  435050 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22047-392561/.minikube/machines/old-k8s-version-147016 ...
	I1206 10:36:38.561905  435050 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22047-392561/.minikube/machines/old-k8s-version-147016
	I1206 10:36:38.561944  435050 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22047-392561/.minikube/machines/old-k8s-version-147016 (perms=drwx------)
	I1206 10:36:38.561960  435050 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22047-392561/.minikube/machines
	I1206 10:36:38.561970  435050 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22047-392561/.minikube/machines (perms=drwxr-xr-x)
	I1206 10:36:38.561982  435050 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22047-392561/.minikube
	I1206 10:36:38.561991  435050 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22047-392561/.minikube (perms=drwxr-xr-x)
	I1206 10:36:38.561999  435050 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22047-392561
	I1206 10:36:38.562015  435050 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22047-392561 (perms=drwxrwxr-x)
	I1206 10:36:38.562029  435050 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1206 10:36:38.562037  435050 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1206 10:36:38.562048  435050 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1206 10:36:38.562055  435050 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1206 10:36:38.562064  435050 main.go:143] libmachine: checking permissions on dir: /home
	I1206 10:36:38.562070  435050 main.go:143] libmachine: skipping /home - not owner
	I1206 10:36:38.562075  435050 main.go:143] libmachine: defining domain...
	I1206 10:36:38.563507  435050 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>old-k8s-version-147016</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22047-392561/.minikube/machines/old-k8s-version-147016/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22047-392561/.minikube/machines/old-k8s-version-147016/old-k8s-version-147016.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-old-k8s-version-147016'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1206 10:36:38.568842  435050 main.go:143] libmachine: domain old-k8s-version-147016 has defined MAC address 52:54:00:ab:48:b3 in network default
	I1206 10:36:38.569450  435050 main.go:143] libmachine: domain old-k8s-version-147016 has defined MAC address 52:54:00:cf:ed:fa in network mk-old-k8s-version-147016
	I1206 10:36:38.569468  435050 main.go:143] libmachine: starting domain...
	I1206 10:36:38.569472  435050 main.go:143] libmachine: ensuring networks are active...
	I1206 10:36:38.570181  435050 main.go:143] libmachine: Ensuring network default is active
	I1206 10:36:38.570550  435050 main.go:143] libmachine: Ensuring network mk-old-k8s-version-147016 is active
	I1206 10:36:38.571104  435050 main.go:143] libmachine: getting domain XML...
	I1206 10:36:38.572492  435050 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>old-k8s-version-147016</name>
	  <uuid>87bee8b8-049d-4cca-9638-3cb05e746fd2</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22047-392561/.minikube/machines/old-k8s-version-147016/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22047-392561/.minikube/machines/old-k8s-version-147016/old-k8s-version-147016.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:cf:ed:fa'/>
	      <source network='mk-old-k8s-version-147016'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:ab:48:b3'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1206 10:36:39.950641  435050 main.go:143] libmachine: waiting for domain to start...
	I1206 10:36:39.952338  435050 main.go:143] libmachine: domain is now running
	I1206 10:36:39.952357  435050 main.go:143] libmachine: waiting for IP...
	I1206 10:36:39.953235  435050 main.go:143] libmachine: domain old-k8s-version-147016 has defined MAC address 52:54:00:cf:ed:fa in network mk-old-k8s-version-147016
	I1206 10:36:39.954070  435050 main.go:143] libmachine: no network interface addresses found for domain old-k8s-version-147016 (source=lease)
	I1206 10:36:39.954092  435050 main.go:143] libmachine: trying to list again with source=arp
	I1206 10:36:39.954582  435050 main.go:143] libmachine: unable to find current IP address of domain old-k8s-version-147016 in network mk-old-k8s-version-147016 (interfaces detected: [])
	I1206 10:36:39.954635  435050 retry.go:31] will retry after 300.695628ms: waiting for domain to come up
	I1206 10:36:40.257081  435050 main.go:143] libmachine: domain old-k8s-version-147016 has defined MAC address 52:54:00:cf:ed:fa in network mk-old-k8s-version-147016
	I1206 10:36:40.257927  435050 main.go:143] libmachine: no network interface addresses found for domain old-k8s-version-147016 (source=lease)
	I1206 10:36:40.257952  435050 main.go:143] libmachine: trying to list again with source=arp
	I1206 10:36:40.258588  435050 main.go:143] libmachine: unable to find current IP address of domain old-k8s-version-147016 in network mk-old-k8s-version-147016 (interfaces detected: [])
	I1206 10:36:40.258640  435050 retry.go:31] will retry after 306.155855ms: waiting for domain to come up
	I1206 10:36:40.566275  435050 main.go:143] libmachine: domain old-k8s-version-147016 has defined MAC address 52:54:00:cf:ed:fa in network mk-old-k8s-version-147016
	I1206 10:36:40.567147  435050 main.go:143] libmachine: no network interface addresses found for domain old-k8s-version-147016 (source=lease)
	I1206 10:36:40.567169  435050 main.go:143] libmachine: trying to list again with source=arp
	I1206 10:36:40.567678  435050 main.go:143] libmachine: unable to find current IP address of domain old-k8s-version-147016 in network mk-old-k8s-version-147016 (interfaces detected: [])
	I1206 10:36:40.567746  435050 retry.go:31] will retry after 416.389234ms: waiting for domain to come up
	I1206 10:36:40.985583  435050 main.go:143] libmachine: domain old-k8s-version-147016 has defined MAC address 52:54:00:cf:ed:fa in network mk-old-k8s-version-147016
	I1206 10:36:40.986333  435050 main.go:143] libmachine: no network interface addresses found for domain old-k8s-version-147016 (source=lease)
	I1206 10:36:40.986358  435050 main.go:143] libmachine: trying to list again with source=arp
	I1206 10:36:40.986849  435050 main.go:143] libmachine: unable to find current IP address of domain old-k8s-version-147016 in network mk-old-k8s-version-147016 (interfaces detected: [])
	I1206 10:36:40.986911  435050 retry.go:31] will retry after 515.816474ms: waiting for domain to come up
	I1206 10:36:41.504494  435050 main.go:143] libmachine: domain old-k8s-version-147016 has defined MAC address 52:54:00:cf:ed:fa in network mk-old-k8s-version-147016
	I1206 10:36:41.505229  435050 main.go:143] libmachine: no network interface addresses found for domain old-k8s-version-147016 (source=lease)
	I1206 10:36:41.505274  435050 main.go:143] libmachine: trying to list again with source=arp
	I1206 10:36:41.505694  435050 main.go:143] libmachine: unable to find current IP address of domain old-k8s-version-147016 in network mk-old-k8s-version-147016 (interfaces detected: [])
	I1206 10:36:41.505749  435050 retry.go:31] will retry after 492.253426ms: waiting for domain to come up
	I1206 10:36:41.999585  435050 main.go:143] libmachine: domain old-k8s-version-147016 has defined MAC address 52:54:00:cf:ed:fa in network mk-old-k8s-version-147016
	I1206 10:36:42.000336  435050 main.go:143] libmachine: no network interface addresses found for domain old-k8s-version-147016 (source=lease)
	I1206 10:36:42.000359  435050 main.go:143] libmachine: trying to list again with source=arp
	I1206 10:36:42.000831  435050 main.go:143] libmachine: unable to find current IP address of domain old-k8s-version-147016 in network mk-old-k8s-version-147016 (interfaces detected: [])
	I1206 10:36:42.000881  435050 retry.go:31] will retry after 741.741494ms: waiting for domain to come up
	I1206 10:36:42.744213  435050 main.go:143] libmachine: domain old-k8s-version-147016 has defined MAC address 52:54:00:cf:ed:fa in network mk-old-k8s-version-147016
	I1206 10:36:42.744931  435050 main.go:143] libmachine: no network interface addresses found for domain old-k8s-version-147016 (source=lease)
	I1206 10:36:42.744950  435050 main.go:143] libmachine: trying to list again with source=arp
	I1206 10:36:42.745348  435050 main.go:143] libmachine: unable to find current IP address of domain old-k8s-version-147016 in network mk-old-k8s-version-147016 (interfaces detected: [])
	I1206 10:36:42.745394  435050 retry.go:31] will retry after 1.023661448s: waiting for domain to come up
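
The "will retry after ...: waiting for domain to come up" lines above come from a jittered backoff loop that keeps polling the network's DHCP lease table (source=lease) and then the ARP cache (source=arp) for the guest's MAC address. A minimal sketch of that pattern, assuming a hypothetical lookupIP helper (this is not minikube's retry.go implementation):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("no IP address yet")

// lookupIP stands in for querying the DHCP lease table and then the ARP cache
// for the domain's MAC address; in this sketch it always reports no address.
func lookupIP(mac string) (string, error) {
	return "", errNoIP
}

// waitForIP retries lookupIP with a growing, jittered delay until an address
// appears or the timeout expires, in the spirit of the retry lines above.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return "", fmt.Errorf("timed out waiting for IP of domain with MAC %s", mac)
}

func main() {
	if _, err := waitForIP("52:54:00:cf:ed:fa", 3*time.Second); err != nil {
		fmt.Println(err)
	}
}
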
	W1206 10:36:40.704822  434530 pod_ready.go:104] pod "kube-apiserver-pause-672164" is not "Ready", error: <nil>
	W1206 10:36:42.705131  434530 pod_ready.go:104] pod "kube-apiserver-pause-672164" is not "Ready", error: <nil>
	I1206 10:36:43.703554  434530 pod_ready.go:94] pod "kube-apiserver-pause-672164" is "Ready"
	I1206 10:36:43.703583  434530 pod_ready.go:86] duration metric: took 9.006905112s for pod "kube-apiserver-pause-672164" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:36:43.707508  434530 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-672164" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:36:43.713838  434530 pod_ready.go:94] pod "kube-controller-manager-pause-672164" is "Ready"
	I1206 10:36:43.713863  434530 pod_ready.go:86] duration metric: took 6.328142ms for pod "kube-controller-manager-pause-672164" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:36:43.717010  434530 pod_ready.go:83] waiting for pod "kube-proxy-qmzzj" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:36:43.722956  434530 pod_ready.go:94] pod "kube-proxy-qmzzj" is "Ready"
	I1206 10:36:43.722983  434530 pod_ready.go:86] duration metric: took 5.949516ms for pod "kube-proxy-qmzzj" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:36:43.725271  434530 pod_ready.go:83] waiting for pod "kube-scheduler-pause-672164" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:36:43.900924  434530 pod_ready.go:94] pod "kube-scheduler-pause-672164" is "Ready"
	I1206 10:36:43.900966  434530 pod_ready.go:86] duration metric: took 175.666072ms for pod "kube-scheduler-pause-672164" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:36:43.900983  434530 pod_ready.go:40] duration metric: took 15.230499711s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 10:36:43.945915  434530 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1206 10:36:43.948102  434530 out.go:179] * Done! kubectl is now configured to use "pause-672164" cluster and "default" namespace by default
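
The pod_ready lines above repeatedly query each kube-system control-plane pod until its Ready condition is true (or the pod is gone). A minimal client-go sketch of that wait, assuming the default kubeconfig location and using kube-apiserver-pause-672164 purely as an example name (this is not minikube's pod_ready.go):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod carries a Ready=True condition.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll one control-plane pod until it reports Ready, mirroring the
	// "waiting for pod ... to be Ready" lines above.
	const ns, name = "kube-system", "kube-apiserver-pause-672164"
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil && isReady(pod) {
			fmt.Printf("pod %q is Ready\n", name)
			return
		}
		select {
		case <-ctx.Done():
			panic(fmt.Sprintf("timed out waiting for %s/%s", ns, name))
		case <-time.After(2 * time.Second):
		}
	}
}
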
	
	
	==> CRI-O <==
	Dec 06 10:36:46 pause-672164 crio[2559]: time="2025-12-06 10:36:46.694007990Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765017406693949198,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b29a1a02-b44f-4532-b3ba-0ae850246ce1 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 10:36:46 pause-672164 crio[2559]: time="2025-12-06 10:36:46.695366890Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=58b95f56-a470-45c3-bb84-05be61968f62 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 10:36:46 pause-672164 crio[2559]: time="2025-12-06 10:36:46.695455616Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=58b95f56-a470-45c3-bb84-05be61968f62 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 10:36:46 pause-672164 crio[2559]: time="2025-12-06 10:36:46.696190958Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8979432950a6c1fcf67c68b2eb4f02b47f20805a424008f709a4c7c1336ae55a,PodSandboxId:99e609d092359786634cd5efd0aea277ecd7255b007761d40c9bb9f216c81476,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765017387192120352,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-fb62d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88027edc-48b1-4cef-b502-1862cea06db0,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4ffca8ac4a6b84e6d70cdaebac985a812ae72c311b3827a79f576f90f19e53e,PodSandboxId:96f91b00ec54aa6941734a4c5d60a8da4d942f04550aee3a058321655734aeab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765017384141083700,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73e60cc2d982768ad976362e52467fd0,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81620bc09c681970184caacee9d8f2bdbab6c32a58f39a58446b72afb8dc9407,PodSandboxId:57f690bff20a2d5c7ad517c3b5496b1156100bc04274085178b8c65dd74102bd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12da
f485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765017384127986605,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 756e0750a6732f41f16f4b7b8e627d9b,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca28df94e41085ddadb99a077625caff629e055cc8ae7009649fb618a1f8c943,PodSandboxId:5855c9b5abc427d12d4af700ed4faea9b3b721d2319059757a5d51937652abcf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120
f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765017360775343824,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79289946cd25f29b404867a34cf3287b,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e9e3a2b7bc2fda1638f69e12db42711d3ac02ebc790d67250daf6b15c963eb4,PodSandboxId:56472d9974f1f9f0f703fc719d7257fb402ba4f32e5dd3fbf71296015640386a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1
,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765017360671800743,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmzzj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4488afb5-0b73-4848-a3f3-c7336feac4f3,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c69c972f222208cf4d5a370865dbfb06eae3160d31223dd19809e5b0cf80378d,PodSandboxId:96f91b00ec54aa6941734a4c5d60a8da4d942f04550aee3a058321655734aeab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f
569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765017360579726461,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73e60cc2d982768ad976362e52467fd0,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f046926dfe49bcebfc784103713fd9e1334f5b98b8672a1a303e0c39ddb8633,PodSandboxId:7d1f1d6c31fd46082d1d5d2564230a77bc8a886b722d15698
2c5d96502aff2dd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765017360463302968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bc552323c16a27b410b35f696f26c30,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bd30e9a544651c2e7f3f2
b99fc219f74f5da0f762cee03f060b5cd77aefa4db,PodSandboxId:57f690bff20a2d5c7ad517c3b5496b1156100bc04274085178b8c65dd74102bd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765017360469091993,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 756e0750a6732f41f16f4b7b8e627d9b,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d387cfa7f24b3b950f15bd033e199852ff6c5648f9ef8df2f4cde0ca3ec3712,PodSandboxId:75beb4bdee5b117fe7ef6b48e84b4db2bce262a65a0b817f4bd1b0248137557d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765017316240432710,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-fb62d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88027edc-48b1-4cef-b502-1862cea06db0,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"
TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5590905c07e4e7296409a32a3292feda5f28ec20bf8b8339a897004396bfb7,PodSandboxId:efbd89472c2319edaf08530523fec9f4ddbb54be7f2f9f618c337467d84e3a3e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765017315834529074,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name:
kube-proxy-qmzzj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4488afb5-0b73-4848-a3f3-c7336feac4f3,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90c3e9a407fc16ba86ec497af261fcd4e80faf407cc3e556c33ecdaf3221f861,PodSandboxId:4167909c1067976616c1119a060169cac5524f93d78fc01670d9903533393729,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765017301876202735,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-672164,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 79289946cd25f29b404867a34cf3287b,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fdf6d5a746b1b843dae69265868007ab66276f60458b7eac802101cb3fa0b82,PodSandboxId:c4c188ea6f8262a2b295144f00d1faea89115c0676f4f0dc24ae1aa08e36fef9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765017301869342260,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bc552323c16a27b410b35f696f26c30,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=58b95f56-a470-45c3-bb84-05be61968f62 name=/runtime.v1.RuntimeService/ListContainers
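
The CRI-O debug entries above are the server side of CRI gRPC calls (Version, ImageFsInfo, ListContainers) issued against the runtime socket while the logs were collected. A minimal client sketch of the same read-only Version and ListContainers calls, assuming CRI-O's default socket path:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O's default socket; the path is an assumption for this sketch.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Same kind of requests that show up as Version / ListContainers above.
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("runtime %s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s  %-24s %s\n", c.Id[:12], c.Metadata.Name, c.State)
	}
}
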
	Dec 06 10:36:46 pause-672164 crio[2559]: time="2025-12-06 10:36:46.746321724Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=652d1b69-8901-44f8-ba43-69935048bc09 name=/runtime.v1.RuntimeService/Version
	Dec 06 10:36:46 pause-672164 crio[2559]: time="2025-12-06 10:36:46.746444752Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=652d1b69-8901-44f8-ba43-69935048bc09 name=/runtime.v1.RuntimeService/Version
	Dec 06 10:36:46 pause-672164 crio[2559]: time="2025-12-06 10:36:46.748589424Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=27b39d2c-17d9-4c10-aae6-c97c8097117d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 10:36:46 pause-672164 crio[2559]: time="2025-12-06 10:36:46.749470686Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765017406749433822,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=27b39d2c-17d9-4c10-aae6-c97c8097117d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 10:36:46 pause-672164 crio[2559]: time="2025-12-06 10:36:46.750579263Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3f8ae2df-70a4-4abe-bdee-e376029da007 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 10:36:46 pause-672164 crio[2559]: time="2025-12-06 10:36:46.750680135Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3f8ae2df-70a4-4abe-bdee-e376029da007 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 10:36:46 pause-672164 crio[2559]: time="2025-12-06 10:36:46.751466285Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8979432950a6c1fcf67c68b2eb4f02b47f20805a424008f709a4c7c1336ae55a,PodSandboxId:99e609d092359786634cd5efd0aea277ecd7255b007761d40c9bb9f216c81476,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765017387192120352,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-fb62d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88027edc-48b1-4cef-b502-1862cea06db0,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4ffca8ac4a6b84e6d70cdaebac985a812ae72c311b3827a79f576f90f19e53e,PodSandboxId:96f91b00ec54aa6941734a4c5d60a8da4d942f04550aee3a058321655734aeab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765017384141083700,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73e60cc2d982768ad976362e52467fd0,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81620bc09c681970184caacee9d8f2bdbab6c32a58f39a58446b72afb8dc9407,PodSandboxId:57f690bff20a2d5c7ad517c3b5496b1156100bc04274085178b8c65dd74102bd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12da
f485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765017384127986605,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 756e0750a6732f41f16f4b7b8e627d9b,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca28df94e41085ddadb99a077625caff629e055cc8ae7009649fb618a1f8c943,PodSandboxId:5855c9b5abc427d12d4af700ed4faea9b3b721d2319059757a5d51937652abcf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120
f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765017360775343824,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79289946cd25f29b404867a34cf3287b,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e9e3a2b7bc2fda1638f69e12db42711d3ac02ebc790d67250daf6b15c963eb4,PodSandboxId:56472d9974f1f9f0f703fc719d7257fb402ba4f32e5dd3fbf71296015640386a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1
,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765017360671800743,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmzzj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4488afb5-0b73-4848-a3f3-c7336feac4f3,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c69c972f222208cf4d5a370865dbfb06eae3160d31223dd19809e5b0cf80378d,PodSandboxId:96f91b00ec54aa6941734a4c5d60a8da4d942f04550aee3a058321655734aeab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f
569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765017360579726461,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73e60cc2d982768ad976362e52467fd0,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f046926dfe49bcebfc784103713fd9e1334f5b98b8672a1a303e0c39ddb8633,PodSandboxId:7d1f1d6c31fd46082d1d5d2564230a77bc8a886b722d15698
2c5d96502aff2dd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765017360463302968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bc552323c16a27b410b35f696f26c30,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bd30e9a544651c2e7f3f2
b99fc219f74f5da0f762cee03f060b5cd77aefa4db,PodSandboxId:57f690bff20a2d5c7ad517c3b5496b1156100bc04274085178b8c65dd74102bd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765017360469091993,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 756e0750a6732f41f16f4b7b8e627d9b,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d387cfa7f24b3b950f15bd033e199852ff6c5648f9ef8df2f4cde0ca3ec3712,PodSandboxId:75beb4bdee5b117fe7ef6b48e84b4db2bce262a65a0b817f4bd1b0248137557d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765017316240432710,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-fb62d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88027edc-48b1-4cef-b502-1862cea06db0,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"
TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5590905c07e4e7296409a32a3292feda5f28ec20bf8b8339a897004396bfb7,PodSandboxId:efbd89472c2319edaf08530523fec9f4ddbb54be7f2f9f618c337467d84e3a3e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765017315834529074,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name:
kube-proxy-qmzzj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4488afb5-0b73-4848-a3f3-c7336feac4f3,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90c3e9a407fc16ba86ec497af261fcd4e80faf407cc3e556c33ecdaf3221f861,PodSandboxId:4167909c1067976616c1119a060169cac5524f93d78fc01670d9903533393729,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765017301876202735,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-672164,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 79289946cd25f29b404867a34cf3287b,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fdf6d5a746b1b843dae69265868007ab66276f60458b7eac802101cb3fa0b82,PodSandboxId:c4c188ea6f8262a2b295144f00d1faea89115c0676f4f0dc24ae1aa08e36fef9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765017301869342260,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bc552323c16a27b410b35f696f26c30,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3f8ae2df-70a4-4abe-bdee-e376029da007 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 10:36:46 pause-672164 crio[2559]: time="2025-12-06 10:36:46.793186840Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=49cf5b5c-2cbf-485c-b0bd-20ece3efbcf5 name=/runtime.v1.RuntimeService/Version
	Dec 06 10:36:46 pause-672164 crio[2559]: time="2025-12-06 10:36:46.793256929Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=49cf5b5c-2cbf-485c-b0bd-20ece3efbcf5 name=/runtime.v1.RuntimeService/Version
	Dec 06 10:36:46 pause-672164 crio[2559]: time="2025-12-06 10:36:46.795571843Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0c9c1ee0-2952-4585-9de9-72732538d612 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 10:36:46 pause-672164 crio[2559]: time="2025-12-06 10:36:46.795974036Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765017406795951432,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0c9c1ee0-2952-4585-9de9-72732538d612 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 10:36:46 pause-672164 crio[2559]: time="2025-12-06 10:36:46.797318444Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0f91f8f8-b0eb-4a80-a040-97e195796ea9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 10:36:46 pause-672164 crio[2559]: time="2025-12-06 10:36:46.797394503Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0f91f8f8-b0eb-4a80-a040-97e195796ea9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 10:36:46 pause-672164 crio[2559]: time="2025-12-06 10:36:46.797678461Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8979432950a6c1fcf67c68b2eb4f02b47f20805a424008f709a4c7c1336ae55a,PodSandboxId:99e609d092359786634cd5efd0aea277ecd7255b007761d40c9bb9f216c81476,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765017387192120352,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-fb62d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88027edc-48b1-4cef-b502-1862cea06db0,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4ffca8ac4a6b84e6d70cdaebac985a812ae72c311b3827a79f576f90f19e53e,PodSandboxId:96f91b00ec54aa6941734a4c5d60a8da4d942f04550aee3a058321655734aeab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765017384141083700,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73e60cc2d982768ad976362e52467fd0,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81620bc09c681970184caacee9d8f2bdbab6c32a58f39a58446b72afb8dc9407,PodSandboxId:57f690bff20a2d5c7ad517c3b5496b1156100bc04274085178b8c65dd74102bd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12da
f485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765017384127986605,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 756e0750a6732f41f16f4b7b8e627d9b,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca28df94e41085ddadb99a077625caff629e055cc8ae7009649fb618a1f8c943,PodSandboxId:5855c9b5abc427d12d4af700ed4faea9b3b721d2319059757a5d51937652abcf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120
f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765017360775343824,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79289946cd25f29b404867a34cf3287b,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e9e3a2b7bc2fda1638f69e12db42711d3ac02ebc790d67250daf6b15c963eb4,PodSandboxId:56472d9974f1f9f0f703fc719d7257fb402ba4f32e5dd3fbf71296015640386a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1
,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765017360671800743,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmzzj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4488afb5-0b73-4848-a3f3-c7336feac4f3,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c69c972f222208cf4d5a370865dbfb06eae3160d31223dd19809e5b0cf80378d,PodSandboxId:96f91b00ec54aa6941734a4c5d60a8da4d942f04550aee3a058321655734aeab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f
569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765017360579726461,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73e60cc2d982768ad976362e52467fd0,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f046926dfe49bcebfc784103713fd9e1334f5b98b8672a1a303e0c39ddb8633,PodSandboxId:7d1f1d6c31fd46082d1d5d2564230a77bc8a886b722d15698
2c5d96502aff2dd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765017360463302968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bc552323c16a27b410b35f696f26c30,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bd30e9a544651c2e7f3f2
b99fc219f74f5da0f762cee03f060b5cd77aefa4db,PodSandboxId:57f690bff20a2d5c7ad517c3b5496b1156100bc04274085178b8c65dd74102bd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765017360469091993,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 756e0750a6732f41f16f4b7b8e627d9b,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d387cfa7f24b3b950f15bd033e199852ff6c5648f9ef8df2f4cde0ca3ec3712,PodSandboxId:75beb4bdee5b117fe7ef6b48e84b4db2bce262a65a0b817f4bd1b0248137557d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765017316240432710,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-fb62d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88027edc-48b1-4cef-b502-1862cea06db0,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"
TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5590905c07e4e7296409a32a3292feda5f28ec20bf8b8339a897004396bfb7,PodSandboxId:efbd89472c2319edaf08530523fec9f4ddbb54be7f2f9f618c337467d84e3a3e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765017315834529074,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name:
kube-proxy-qmzzj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4488afb5-0b73-4848-a3f3-c7336feac4f3,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90c3e9a407fc16ba86ec497af261fcd4e80faf407cc3e556c33ecdaf3221f861,PodSandboxId:4167909c1067976616c1119a060169cac5524f93d78fc01670d9903533393729,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765017301876202735,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-672164,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 79289946cd25f29b404867a34cf3287b,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fdf6d5a746b1b843dae69265868007ab66276f60458b7eac802101cb3fa0b82,PodSandboxId:c4c188ea6f8262a2b295144f00d1faea89115c0676f4f0dc24ae1aa08e36fef9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765017301869342260,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bc552323c16a27b410b35f696f26c30,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0f91f8f8-b0eb-4a80-a040-97e195796ea9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 10:36:46 pause-672164 crio[2559]: time="2025-12-06 10:36:46.837810105Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f530bab3-b6fa-4816-8941-4b29aeedab83 name=/runtime.v1.RuntimeService/Version
	Dec 06 10:36:46 pause-672164 crio[2559]: time="2025-12-06 10:36:46.837888960Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f530bab3-b6fa-4816-8941-4b29aeedab83 name=/runtime.v1.RuntimeService/Version
	Dec 06 10:36:46 pause-672164 crio[2559]: time="2025-12-06 10:36:46.839251744Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a2350e4d-12cb-427f-a7f9-304d0a347bb6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 10:36:46 pause-672164 crio[2559]: time="2025-12-06 10:36:46.839611174Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765017406839588505,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a2350e4d-12cb-427f-a7f9-304d0a347bb6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 06 10:36:46 pause-672164 crio[2559]: time="2025-12-06 10:36:46.840771514Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d7296d5b-6197-4a91-8782-369c1f1bca30 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 10:36:46 pause-672164 crio[2559]: time="2025-12-06 10:36:46.844089742Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d7296d5b-6197-4a91-8782-369c1f1bca30 name=/runtime.v1.RuntimeService/ListContainers
	Dec 06 10:36:46 pause-672164 crio[2559]: time="2025-12-06 10:36:46.844460233Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8979432950a6c1fcf67c68b2eb4f02b47f20805a424008f709a4c7c1336ae55a,PodSandboxId:99e609d092359786634cd5efd0aea277ecd7255b007761d40c9bb9f216c81476,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765017387192120352,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-fb62d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88027edc-48b1-4cef-b502-1862cea06db0,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4ffca8ac4a6b84e6d70cdaebac985a812ae72c311b3827a79f576f90f19e53e,PodSandboxId:96f91b00ec54aa6941734a4c5d60a8da4d942f04550aee3a058321655734aeab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765017384141083700,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73e60cc2d982768ad976362e52467fd0,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81620bc09c681970184caacee9d8f2bdbab6c32a58f39a58446b72afb8dc9407,PodSandboxId:57f690bff20a2d5c7ad517c3b5496b1156100bc04274085178b8c65dd74102bd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12da
f485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765017384127986605,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 756e0750a6732f41f16f4b7b8e627d9b,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca28df94e41085ddadb99a077625caff629e055cc8ae7009649fb618a1f8c943,PodSandboxId:5855c9b5abc427d12d4af700ed4faea9b3b721d2319059757a5d51937652abcf,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120
f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765017360775343824,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79289946cd25f29b404867a34cf3287b,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e9e3a2b7bc2fda1638f69e12db42711d3ac02ebc790d67250daf6b15c963eb4,PodSandboxId:56472d9974f1f9f0f703fc719d7257fb402ba4f32e5dd3fbf71296015640386a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1
,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765017360671800743,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmzzj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4488afb5-0b73-4848-a3f3-c7336feac4f3,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c69c972f222208cf4d5a370865dbfb06eae3160d31223dd19809e5b0cf80378d,PodSandboxId:96f91b00ec54aa6941734a4c5d60a8da4d942f04550aee3a058321655734aeab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f
569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765017360579726461,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73e60cc2d982768ad976362e52467fd0,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f046926dfe49bcebfc784103713fd9e1334f5b98b8672a1a303e0c39ddb8633,PodSandboxId:7d1f1d6c31fd46082d1d5d2564230a77bc8a886b722d15698
2c5d96502aff2dd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765017360463302968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bc552323c16a27b410b35f696f26c30,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bd30e9a544651c2e7f3f2
b99fc219f74f5da0f762cee03f060b5cd77aefa4db,PodSandboxId:57f690bff20a2d5c7ad517c3b5496b1156100bc04274085178b8c65dd74102bd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765017360469091993,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 756e0750a6732f41f16f4b7b8e627d9b,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d387cfa7f24b3b950f15bd033e199852ff6c5648f9ef8df2f4cde0ca3ec3712,PodSandboxId:75beb4bdee5b117fe7ef6b48e84b4db2bce262a65a0b817f4bd1b0248137557d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765017316240432710,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-fb62d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88027edc-48b1-4cef-b502-1862cea06db0,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"
TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d5590905c07e4e7296409a32a3292feda5f28ec20bf8b8339a897004396bfb7,PodSandboxId:efbd89472c2319edaf08530523fec9f4ddbb54be7f2f9f618c337467d84e3a3e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765017315834529074,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name:
kube-proxy-qmzzj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4488afb5-0b73-4848-a3f3-c7336feac4f3,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90c3e9a407fc16ba86ec497af261fcd4e80faf407cc3e556c33ecdaf3221f861,PodSandboxId:4167909c1067976616c1119a060169cac5524f93d78fc01670d9903533393729,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765017301876202735,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-672164,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 79289946cd25f29b404867a34cf3287b,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fdf6d5a746b1b843dae69265868007ab66276f60458b7eac802101cb3fa0b82,PodSandboxId:c4c188ea6f8262a2b295144f00d1faea89115c0676f4f0dc24ae1aa08e36fef9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765017301869342260,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-672164,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bc552323c16a27b410b35f696f26c30,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d7296d5b-6197-4a91-8782-369c1f1bca30 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	8979432950a6c       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   19 seconds ago       Running             coredns                   1                   99e609d092359       coredns-66bc5c9577-fb62d               kube-system
	e4ffca8ac4a6b       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   22 seconds ago       Running             kube-apiserver            2                   96f91b00ec54a       kube-apiserver-pause-672164            kube-system
	81620bc09c681       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   22 seconds ago       Running             kube-controller-manager   2                   57f690bff20a2       kube-controller-manager-pause-672164   kube-system
	ca28df94e4108       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   46 seconds ago       Running             etcd                      1                   5855c9b5abc42       etcd-pause-672164                      kube-system
	6e9e3a2b7bc2f       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   46 seconds ago       Running             kube-proxy                1                   56472d9974f1f       kube-proxy-qmzzj                       kube-system
	c69c972f22220       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   46 seconds ago       Exited              kube-apiserver            1                   96f91b00ec54a       kube-apiserver-pause-672164            kube-system
	0bd30e9a54465       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   46 seconds ago       Exited              kube-controller-manager   1                   57f690bff20a2       kube-controller-manager-pause-672164   kube-system
	7f046926dfe49       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   46 seconds ago       Running             kube-scheduler            1                   7d1f1d6c31fd4       kube-scheduler-pause-672164            kube-system
	6d387cfa7f24b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   About a minute ago   Exited              coredns                   0                   75beb4bdee5b1       coredns-66bc5c9577-fb62d               kube-system
	2d5590905c07e       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   About a minute ago   Exited              kube-proxy                0                   efbd89472c231       kube-proxy-qmzzj                       kube-system
	90c3e9a407fc1       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   About a minute ago   Exited              etcd                      0                   4167909c10679       etcd-pause-672164                      kube-system
	2fdf6d5a746b1       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   About a minute ago   Exited              kube-scheduler            0                   c4c188ea6f826       kube-scheduler-pause-672164            kube-system
	
	
	==> coredns [6d387cfa7f24b3b950f15bd033e199852ff6c5648f9ef8df2f4cde0ca3ec3712] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37591 - 16463 "HINFO IN 1900317965491801.2060865466064877640. udp 54 false 512" NXDOMAIN qr,rd,ra 129 0.092752672s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [8979432950a6c1fcf67c68b2eb4f02b47f20805a424008f709a4c7c1336ae55a] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50122 - 39643 "HINFO IN 1540554435651966770.8493445087611373493. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.02778475s
	
	
	==> describe nodes <==
	Name:               pause-672164
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-672164
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4
	                    minikube.k8s.io/name=pause-672164
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T10_35_10_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 10:35:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-672164
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 10:36:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 10:36:26 +0000   Sat, 06 Dec 2025 10:35:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 10:36:26 +0000   Sat, 06 Dec 2025 10:35:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 10:36:26 +0000   Sat, 06 Dec 2025 10:35:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 10:36:26 +0000   Sat, 06 Dec 2025 10:35:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.24
	  Hostname:    pause-672164
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 66b45a3b0c114ec58b30954c27db0a28
	  System UUID:                66b45a3b-0c11-4ec5-8b30-954c27db0a28
	  Boot ID:                    8f23c7cc-dbb6-45ee-ae76-d8d4fe14105b
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-fb62d                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     92s
	  kube-system                 etcd-pause-672164                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         98s
	  kube-system                 kube-apiserver-pause-672164             250m (12%)    0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-controller-manager-pause-672164    200m (10%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-proxy-qmzzj                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-scheduler-pause-672164             100m (5%)     0 (0%)      0 (0%)           0 (0%)         98s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 90s                kube-proxy       
	  Normal  Starting                 20s                kube-proxy       
	  Normal  NodeHasSufficientPID     98s                kubelet          Node pause-672164 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  98s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  98s                kubelet          Node pause-672164 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    98s                kubelet          Node pause-672164 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 98s                kubelet          Starting kubelet.
	  Normal  NodeReady                97s                kubelet          Node pause-672164 status is now: NodeReady
	  Normal  RegisteredNode           93s                node-controller  Node pause-672164 event: Registered Node pause-672164 in Controller
	  Normal  Starting                 44s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  43s (x8 over 43s)  kubelet          Node pause-672164 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s (x8 over 43s)  kubelet          Node pause-672164 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s (x7 over 43s)  kubelet          Node pause-672164 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  43s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           17s                node-controller  Node pause-672164 event: Registered Node pause-672164 in Controller
	
	
	==> dmesg <==
	[Dec 6 10:34] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001378] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.003040] (rpcbind)[120]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.171576] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000013] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.094809] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.097846] kauditd_printk_skb: 130 callbacks suppressed
	[Dec 6 10:35] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.943285] kauditd_printk_skb: 18 callbacks suppressed
	[ +29.475400] kauditd_printk_skb: 190 callbacks suppressed
	[Dec 6 10:36] kauditd_printk_skb: 304 callbacks suppressed
	[ +19.507323] kauditd_printk_skb: 12 callbacks suppressed
	[  +1.705251] kauditd_printk_skb: 43 callbacks suppressed
	
	
	==> etcd [90c3e9a407fc16ba86ec497af261fcd4e80faf407cc3e556c33ecdaf3221f861] <==
	{"level":"warn","ts":"2025-12-06T10:35:19.017291Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"260.677042ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1654398807581190815 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:16f59af33ab4ea9e>","response":"size:39"}
	{"level":"warn","ts":"2025-12-06T10:35:19.017341Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-06T10:35:18.628988Z","time spent":"388.350967ms","remote":"127.0.0.1:55148","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2025-12-06T10:35:19.146746Z","caller":"traceutil/trace.go:172","msg":"trace[249881576] linearizableReadLoop","detail":"{readStateIndex:385; appliedIndex:385; }","duration":"115.082962ms","start":"2025-12-06T10:35:19.031641Z","end":"2025-12-06T10:35:19.146724Z","steps":["trace[249881576] 'read index received'  (duration: 115.076725ms)","trace[249881576] 'applied index is now lower than readState.Index'  (duration: 5.266µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T10:35:19.276740Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"245.084793ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-fb62d\" limit:1 ","response":"range_response_count:1 size:5628"}
	{"level":"info","ts":"2025-12-06T10:35:19.276822Z","caller":"traceutil/trace.go:172","msg":"trace[286604001] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-fb62d; range_end:; response_count:1; response_revision:374; }","duration":"245.169643ms","start":"2025-12-06T10:35:19.031638Z","end":"2025-12-06T10:35:19.276807Z","steps":["trace[286604001] 'agreement among raft nodes before linearized reading'  (duration: 115.178966ms)","trace[286604001] 'range keys from in-memory index tree'  (duration: 129.732026ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-06T10:35:19.276757Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.852378ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1654398807581190817 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.24\" mod_revision:204 > success:<request_put:<key:\"/registry/masterleases/192.168.39.24\" value_size:66 lease:1654398807581190814 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.24\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-06T10:35:19.276991Z","caller":"traceutil/trace.go:172","msg":"trace[1608792020] transaction","detail":"{read_only:false; response_revision:375; number_of_response:1; }","duration":"257.429207ms","start":"2025-12-06T10:35:19.019552Z","end":"2025-12-06T10:35:19.276981Z","steps":["trace[1608792020] 'process raft request'  (duration: 127.30437ms)","trace[1608792020] 'compare'  (duration: 129.784477ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-06T10:35:51.142252Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-06T10:35:51.142347Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-672164","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.24:2380"],"advertise-client-urls":["https://192.168.39.24:2379"]}
	{"level":"error","ts":"2025-12-06T10:35:51.142443Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-06T10:35:51.225586Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-06T10:35:51.225737Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-06T10:35:51.225774Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"602226ed500416f5","current-leader-member-id":"602226ed500416f5"}
	{"level":"info","ts":"2025-12-06T10:35:51.225812Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-06T10:35:51.225873Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-06T10:35:51.225869Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-06T10:35:51.225962Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-06T10:35:51.225968Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-06T10:35:51.226002Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.24:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-06T10:35:51.226099Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.24:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-06T10:35:51.226110Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.24:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-06T10:35:51.230258Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.24:2380"}
	{"level":"error","ts":"2025-12-06T10:35:51.230325Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.24:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-06T10:35:51.230347Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.24:2380"}
	{"level":"info","ts":"2025-12-06T10:35:51.230353Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-672164","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.24:2380"],"advertise-client-urls":["https://192.168.39.24:2379"]}
	
	
	==> etcd [ca28df94e41085ddadb99a077625caff629e055cc8ae7009649fb618a1f8c943] <==
	{"level":"warn","ts":"2025-12-06T10:36:25.643136Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:36:25.654499Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:36:25.670871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:36:25.684214Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:36:25.695414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:36:25.712992Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:36:25.725481Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:36:25.734074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:36:25.743803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:36:25.750606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:36:25.758373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:36:25.769290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:36:25.778765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:36:25.789063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:36:25.796182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:36:25.807646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:36:25.819159Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:36:25.830528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:36:25.841622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:36:25.855525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:36:25.873256Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:36:25.891609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:36:25.904531Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:36:25.912971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:36:25.959962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55150","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:36:47 up 2 min,  0 users,  load average: 0.48, 0.21, 0.08
	Linux pause-672164 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Dec  4 13:30:13 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [c69c972f222208cf4d5a370865dbfb06eae3160d31223dd19809e5b0cf80378d] <==
	W1206 10:36:02.131237       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 10:36:02.131305       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I1206 10:36:02.154781       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I1206 10:36:02.197234       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1206 10:36:02.222823       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1206 10:36:02.222875       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1206 10:36:02.229220       1 instance.go:239] Using reconciler: lease
	W1206 10:36:02.245545       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1206 10:36:02.248239       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 10:36:03.131952       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 10:36:03.132306       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 10:36:03.248628       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 10:36:04.594936       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 10:36:04.743565       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 10:36:04.956190       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 10:36:06.956366       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 10:36:07.549125       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 10:36:07.882407       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 10:36:10.527535       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 10:36:11.987621       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 10:36:12.604829       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 10:36:17.443931       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 10:36:18.742784       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 10:36:19.709967       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F1206 10:36:22.232889       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [e4ffca8ac4a6b84e6d70cdaebac985a812ae72c311b3827a79f576f90f19e53e] <==
	I1206 10:36:26.737769       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1206 10:36:26.740528       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	E1206 10:36:26.743184       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1206 10:36:26.743485       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1206 10:36:26.744394       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1206 10:36:26.754884       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1206 10:36:26.754996       1 aggregator.go:171] initial CRD sync complete...
	I1206 10:36:26.755092       1 autoregister_controller.go:144] Starting autoregister controller
	I1206 10:36:26.755100       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1206 10:36:26.755246       1 cache.go:39] Caches are synced for autoregister controller
	I1206 10:36:26.755325       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1206 10:36:26.755473       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1206 10:36:26.782873       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1206 10:36:26.786476       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 10:36:26.791402       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1206 10:36:26.791654       1 policy_source.go:240] refreshing policies
	I1206 10:36:26.812510       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 10:36:27.006480       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1206 10:36:27.538581       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1206 10:36:28.180564       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1206 10:36:28.230960       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1206 10:36:28.275177       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 10:36:28.286000       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 10:36:30.107171       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 10:36:30.354620       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [0bd30e9a544651c2e7f3f2b99fc219f74f5da0f762cee03f060b5cd77aefa4db] <==
	I1206 10:36:02.626926       1 serving.go:386] Generated self-signed cert in-memory
	I1206 10:36:03.136398       1 controllermanager.go:191] "Starting" version="v1.34.2"
	I1206 10:36:03.136432       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 10:36:03.139750       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1206 10:36:03.140007       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1206 10:36:03.140075       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1206 10:36:03.140294       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1206 10:36:23.242337       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.24:8443/healthz\": dial tcp 192.168.39.24:8443: connect: connection refused"
	
	
	==> kube-controller-manager [81620bc09c681970184caacee9d8f2bdbab6c32a58f39a58446b72afb8dc9407] <==
	I1206 10:36:30.112591       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1206 10:36:30.112658       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1206 10:36:30.115496       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1206 10:36:30.121715       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1206 10:36:30.124561       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1206 10:36:30.145946       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 10:36:30.148783       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1206 10:36:30.148854       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1206 10:36:30.148904       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1206 10:36:30.149768       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1206 10:36:30.149886       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 10:36:30.149899       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1206 10:36:30.149915       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1206 10:36:30.150716       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1206 10:36:30.156571       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1206 10:36:30.158703       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 10:36:30.159306       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1206 10:36:30.159437       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1206 10:36:30.159531       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-672164"
	I1206 10:36:30.159602       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1206 10:36:30.163890       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1206 10:36:30.163994       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1206 10:36:30.168441       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1206 10:36:30.168558       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1206 10:36:30.171950       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	
	
	==> kube-proxy [2d5590905c07e4e7296409a32a3292feda5f28ec20bf8b8339a897004396bfb7] <==
	I1206 10:35:16.257251       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1206 10:35:16.358922       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 10:35:16.358980       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.24"]
	E1206 10:35:16.359122       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 10:35:16.418386       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1206 10:35:16.418512       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1206 10:35:16.418555       1 server_linux.go:132] "Using iptables Proxier"
	I1206 10:35:16.430290       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 10:35:16.430873       1 server.go:527] "Version info" version="v1.34.2"
	I1206 10:35:16.430918       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 10:35:16.436459       1 config.go:200] "Starting service config controller"
	I1206 10:35:16.436506       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 10:35:16.436533       1 config.go:106] "Starting endpoint slice config controller"
	I1206 10:35:16.436540       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 10:35:16.436552       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 10:35:16.436556       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 10:35:16.442587       1 config.go:309] "Starting node config controller"
	I1206 10:35:16.442683       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 10:35:16.442692       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 10:35:16.537178       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 10:35:16.537260       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 10:35:16.537141       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [6e9e3a2b7bc2fda1638f69e12db42711d3ac02ebc790d67250daf6b15c963eb4] <==
	E1206 10:36:23.242622       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-672164&limit=500&resourceVersion=0\": dial tcp 192.168.39.24:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.24:42856->192.168.39.24:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1206 10:36:26.745143       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 10:36:26.745237       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.24"]
	E1206 10:36:26.745403       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 10:36:26.792148       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1206 10:36:26.792244       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1206 10:36:26.792281       1 server_linux.go:132] "Using iptables Proxier"
	I1206 10:36:26.805225       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 10:36:26.805597       1 server.go:527] "Version info" version="v1.34.2"
	I1206 10:36:26.805617       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 10:36:26.814458       1 config.go:200] "Starting service config controller"
	I1206 10:36:26.814512       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 10:36:26.814541       1 config.go:106] "Starting endpoint slice config controller"
	I1206 10:36:26.814545       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 10:36:26.814558       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 10:36:26.814575       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 10:36:26.815590       1 config.go:309] "Starting node config controller"
	I1206 10:36:26.815601       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 10:36:26.815606       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 10:36:26.915115       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 10:36:26.915151       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 10:36:26.915179       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2fdf6d5a746b1b843dae69265868007ab66276f60458b7eac802101cb3fa0b82] <==
	E1206 10:35:06.228788       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1206 10:35:06.249961       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 10:35:06.344084       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1206 10:35:06.346704       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 10:35:06.466550       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1206 10:35:06.483949       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1206 10:35:06.516884       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 10:35:06.520450       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1206 10:35:06.576069       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1206 10:35:06.603928       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1206 10:35:06.614194       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1206 10:35:06.618746       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1206 10:35:06.630242       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 10:35:06.663004       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 10:35:06.681553       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1206 10:35:06.748948       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1206 10:35:06.871320       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1206 10:35:07.941423       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	I1206 10:35:08.601843       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 10:35:51.141229       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1206 10:35:51.141471       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 10:35:51.147170       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1206 10:35:51.147230       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1206 10:35:51.147264       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1206 10:35:51.147297       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [7f046926dfe49bcebfc784103713fd9e1334f5b98b8672a1a303e0c39ddb8633] <==
	E1206 10:36:23.259236       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.39.24:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.24:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1206 10:36:23.259388       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.39.24:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.24:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 10:36:24.118730       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.39.24:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.24:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1206 10:36:24.123299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.39.24:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.24:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 10:36:24.192354       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.39.24:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.24:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1206 10:36:24.219579       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.24:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.24:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1206 10:36:24.311647       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.24:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.24:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 10:36:24.318290       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.39.24:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.39.24:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1206 10:36:24.335840       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.39.24:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.24:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1206 10:36:24.349198       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.39.24:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.24:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1206 10:36:26.645254       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 10:36:26.645401       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1206 10:36:26.645485       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1206 10:36:26.645562       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1206 10:36:26.645688       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 10:36:26.645816       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 10:36:26.645911       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1206 10:36:26.646059       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1206 10:36:26.646228       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 10:36:26.646421       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1206 10:36:26.646497       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1206 10:36:26.646584       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1206 10:36:26.646615       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1206 10:36:26.659988       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1206 10:36:29.555413       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 06 10:36:26 pause-672164 kubelet[3438]: E1206 10:36:26.137676    3438 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-672164\" not found" node="pause-672164"
	Dec 06 10:36:26 pause-672164 kubelet[3438]: E1206 10:36:26.139248    3438 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-672164\" not found" node="pause-672164"
	Dec 06 10:36:26 pause-672164 kubelet[3438]: E1206 10:36:26.139553    3438 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-672164\" not found" node="pause-672164"
	Dec 06 10:36:26 pause-672164 kubelet[3438]: I1206 10:36:26.681195    3438 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-672164"
	Dec 06 10:36:26 pause-672164 kubelet[3438]: E1206 10:36:26.805758    3438 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-pause-672164\" already exists" pod="kube-system/etcd-pause-672164"
	Dec 06 10:36:26 pause-672164 kubelet[3438]: I1206 10:36:26.805927    3438 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-672164"
	Dec 06 10:36:26 pause-672164 kubelet[3438]: E1206 10:36:26.826852    3438 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-672164\" already exists" pod="kube-system/kube-apiserver-pause-672164"
	Dec 06 10:36:26 pause-672164 kubelet[3438]: I1206 10:36:26.827075    3438 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-672164"
	Dec 06 10:36:26 pause-672164 kubelet[3438]: E1206 10:36:26.836817    3438 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-672164\" already exists" pod="kube-system/kube-controller-manager-pause-672164"
	Dec 06 10:36:26 pause-672164 kubelet[3438]: I1206 10:36:26.837075    3438 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-672164"
	Dec 06 10:36:26 pause-672164 kubelet[3438]: E1206 10:36:26.847102    3438 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-672164\" already exists" pod="kube-system/kube-scheduler-pause-672164"
	Dec 06 10:36:26 pause-672164 kubelet[3438]: I1206 10:36:26.861919    3438 apiserver.go:52] "Watching apiserver"
	Dec 06 10:36:26 pause-672164 kubelet[3438]: I1206 10:36:26.880895    3438 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 06 10:36:26 pause-672164 kubelet[3438]: I1206 10:36:26.886782    3438 kubelet_node_status.go:124] "Node was previously registered" node="pause-672164"
	Dec 06 10:36:26 pause-672164 kubelet[3438]: I1206 10:36:26.887226    3438 kubelet_node_status.go:78] "Successfully registered node" node="pause-672164"
	Dec 06 10:36:26 pause-672164 kubelet[3438]: I1206 10:36:26.887393    3438 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 06 10:36:26 pause-672164 kubelet[3438]: I1206 10:36:26.889617    3438 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 06 10:36:26 pause-672164 kubelet[3438]: I1206 10:36:26.981145    3438 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4488afb5-0b73-4848-a3f3-c7336feac4f3-xtables-lock\") pod \"kube-proxy-qmzzj\" (UID: \"4488afb5-0b73-4848-a3f3-c7336feac4f3\") " pod="kube-system/kube-proxy-qmzzj"
	Dec 06 10:36:26 pause-672164 kubelet[3438]: I1206 10:36:26.981175    3438 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4488afb5-0b73-4848-a3f3-c7336feac4f3-lib-modules\") pod \"kube-proxy-qmzzj\" (UID: \"4488afb5-0b73-4848-a3f3-c7336feac4f3\") " pod="kube-system/kube-proxy-qmzzj"
	Dec 06 10:36:27 pause-672164 kubelet[3438]: I1206 10:36:27.139006    3438 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-672164"
	Dec 06 10:36:27 pause-672164 kubelet[3438]: E1206 10:36:27.148599    3438 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-672164\" already exists" pod="kube-system/kube-apiserver-pause-672164"
	Dec 06 10:36:34 pause-672164 kubelet[3438]: E1206 10:36:34.020892    3438 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765017394020552356 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	Dec 06 10:36:34 pause-672164 kubelet[3438]: E1206 10:36:34.020921    3438 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765017394020552356 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	Dec 06 10:36:44 pause-672164 kubelet[3438]: E1206 10:36:44.022988    3438 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765017404022167381 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	Dec 06 10:36:44 pause-672164 kubelet[3438]: E1206 10:36:44.023084    3438 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765017404022167381 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-672164 -n pause-672164
helpers_test.go:269: (dbg) Run:  kubectl --context pause-672164 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (58.01s)
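A note on the post-mortem above: the kube-proxy and kubelet excerpts largely show the control plane coming back after the second start (caches re-syncing, the node re-registering, mirror pods already present), plus two recurring warnings that are easy to mistake for the failure cause. The ip6tables "Table does not exist" message only means the guest kernel lacks an IPv6 nat table, so kube-proxy falls back to single-stack IPv4, and the eviction-manager errors say the kubelet could not derive HasDedicatedImageFs from the image-filesystem stats returned by the runtime. A minimal sketch for poking at the same profile by hand, assuming pause-672164 has not been torn down; the first two commands mirror the harness commands above, and the ssh check is purely illustrative:

	# Same status/pod checks the harness runs (see the helpers_test.go lines above)
	out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-672164 -n pause-672164
	kubectl --context pause-672164 get po -A --field-selector=status.phase!=Running

	# Illustrative only: confirm the guest has no IPv6 nat table, which is what the kube-proxy notice reports
	out/minikube-linux-amd64 -p pause-672164 ssh "sudo ip6tables -t nat -L"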

                                                
                                    

Test pass (365/431)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 22.31
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.16
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.2/json-events 9.49
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.08
18 TestDownloadOnly/v1.34.2/DeleteAll 0.16
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.15
21 TestDownloadOnly/v1.35.0-beta.0/json-events 10.33
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.13
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.17
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.15
30 TestBinaryMirror 0.67
31 TestOffline 69.02
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 127.13
40 TestAddons/serial/GCPAuth/Namespaces 0.14
41 TestAddons/serial/GCPAuth/FakeCredentials 11.53
44 TestAddons/parallel/Registry 18.77
45 TestAddons/parallel/RegistryCreds 0.71
47 TestAddons/parallel/InspektorGadget 11.71
48 TestAddons/parallel/MetricsServer 6.99
50 TestAddons/parallel/CSI 51.32
51 TestAddons/parallel/Headlamp 22.3
52 TestAddons/parallel/CloudSpanner 6.59
53 TestAddons/parallel/LocalPath 58.99
54 TestAddons/parallel/NvidiaDevicePlugin 6.82
55 TestAddons/parallel/Yakd 11.17
57 TestAddons/StoppedEnableDisable 86.37
58 TestCertOptions 41.82
59 TestCertExpiration 625.77
61 TestForceSystemdFlag 80.82
62 TestForceSystemdEnv 58.23
67 TestErrorSpam/setup 39.23
68 TestErrorSpam/start 0.36
69 TestErrorSpam/status 0.69
70 TestErrorSpam/pause 1.57
71 TestErrorSpam/unpause 1.8
72 TestErrorSpam/stop 74.72
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 48.98
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 51.15
79 TestFunctional/serial/KubeContext 0.05
80 TestFunctional/serial/KubectlGetPods 0.08
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.35
84 TestFunctional/serial/CacheCmd/cache/add_local 2.16
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.2
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.61
89 TestFunctional/serial/CacheCmd/cache/delete 0.13
90 TestFunctional/serial/MinikubeKubectlCmd 0.13
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
92 TestFunctional/serial/ExtraConfig 53.22
93 TestFunctional/serial/ComponentHealth 0.07
94 TestFunctional/serial/LogsCmd 1.4
95 TestFunctional/serial/LogsFileCmd 1.33
96 TestFunctional/serial/InvalidService 4.18
98 TestFunctional/parallel/ConfigCmd 0.44
99 TestFunctional/parallel/DashboardCmd 17.99
100 TestFunctional/parallel/DryRun 0.25
101 TestFunctional/parallel/InternationalLanguage 0.15
102 TestFunctional/parallel/StatusCmd 1.23
106 TestFunctional/parallel/ServiceCmdConnect 21.51
107 TestFunctional/parallel/AddonsCmd 0.16
108 TestFunctional/parallel/PersistentVolumeClaim 45.46
110 TestFunctional/parallel/SSHCmd 0.35
111 TestFunctional/parallel/CpCmd 1.33
112 TestFunctional/parallel/MySQL 21.74
113 TestFunctional/parallel/FileSync 0.21
114 TestFunctional/parallel/CertSync 1.25
118 TestFunctional/parallel/NodeLabels 0.06
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.39
122 TestFunctional/parallel/License 0.36
123 TestFunctional/parallel/Version/short 0.07
124 TestFunctional/parallel/Version/components 0.52
125 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
126 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.08
127 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.08
128 TestFunctional/parallel/ServiceCmd/DeployApp 21.23
138 TestFunctional/parallel/ServiceCmd/List 0.47
139 TestFunctional/parallel/ServiceCmd/JSONOutput 0.5
140 TestFunctional/parallel/ServiceCmd/HTTPS 0.35
141 TestFunctional/parallel/ServiceCmd/Format 0.36
142 TestFunctional/parallel/ProfileCmd/profile_not_create 0.54
143 TestFunctional/parallel/MountCmd/any-port 8.38
144 TestFunctional/parallel/ServiceCmd/URL 0.42
145 TestFunctional/parallel/ProfileCmd/profile_list 0.35
146 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
147 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
148 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
149 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
150 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
151 TestFunctional/parallel/ImageCommands/ImageBuild 6.41
152 TestFunctional/parallel/ImageCommands/Setup 1.97
153 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.23
154 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.86
155 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.71
156 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.61
157 TestFunctional/parallel/ImageCommands/ImageRemove 0.8
158 TestFunctional/parallel/MountCmd/specific-port 1.51
159 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.15
160 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.81
161 TestFunctional/parallel/MountCmd/VerifyCleanup 1.14
162 TestFunctional/delete_echo-server_images 0.04
163 TestFunctional/delete_my-image_image 0.02
164 TestFunctional/delete_minikube_cached_images 0.02
168 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
169 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 80.91
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 44.29
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.05
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 0.07
176 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 3.45
177 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 2.1
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.07
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.07
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.19
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.62
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.14
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 0.13
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 0.12
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.2
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.23
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 3.52
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.46
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.26
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.13
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 0.7
200 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.16
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.31
204 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 1.23
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.19
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.23
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 0.07
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.41
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.42
216 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.08
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.08
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.09
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.33
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.31
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.33
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.41
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.43
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.2
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.19
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.19
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.2
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 3.74
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.83
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.65
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.06
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.61
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 0.9
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.65
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.5
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.47
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.74
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.54
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.56
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.24
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
261 TestMultiControlPlane/serial/StartCluster 208.98
262 TestMultiControlPlane/serial/DeployApp 7.97
263 TestMultiControlPlane/serial/PingHostFromPods 1.32
264 TestMultiControlPlane/serial/AddWorkerNode 48.49
265 TestMultiControlPlane/serial/NodeLabels 0.07
266 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.7
267 TestMultiControlPlane/serial/CopyFile 11.04
268 TestMultiControlPlane/serial/StopSecondaryNode 82.46
269 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.52
270 TestMultiControlPlane/serial/RestartSecondaryNode 40.01
271 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.75
272 TestMultiControlPlane/serial/RestartClusterKeepsNodes 368.08
273 TestMultiControlPlane/serial/DeleteSecondaryNode 17.96
274 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.53
275 TestMultiControlPlane/serial/StopCluster 242.63
276 TestMultiControlPlane/serial/RestartCluster 93.7
277 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.51
278 TestMultiControlPlane/serial/AddSecondaryNode 103.33
279 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.71
284 TestJSONOutput/start/Command 74.09
285 TestJSONOutput/start/Audit 0
287 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
288 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
290 TestJSONOutput/pause/Command 0.73
291 TestJSONOutput/pause/Audit 0
293 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
294 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
296 TestJSONOutput/unpause/Command 0.63
297 TestJSONOutput/unpause/Audit 0
299 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
300 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
302 TestJSONOutput/stop/Command 7.18
303 TestJSONOutput/stop/Audit 0
305 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
306 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
307 TestErrorJSONOutput 0.24
312 TestMainNoArgs 0.06
313 TestMinikubeProfile 77.34
316 TestMountStart/serial/StartWithMountFirst 22.27
317 TestMountStart/serial/VerifyMountFirst 0.3
318 TestMountStart/serial/StartWithMountSecond 19.34
319 TestMountStart/serial/VerifyMountSecond 0.3
320 TestMountStart/serial/DeleteFirst 0.69
321 TestMountStart/serial/VerifyMountPostDelete 0.3
322 TestMountStart/serial/Stop 1.31
323 TestMountStart/serial/RestartStopped 18.8
324 TestMountStart/serial/VerifyMountPostStop 0.32
327 TestMultiNode/serial/FreshStart2Nodes 130.91
328 TestMultiNode/serial/DeployApp2Nodes 6.16
329 TestMultiNode/serial/PingHostFrom2Pods 0.93
330 TestMultiNode/serial/AddNode 41.91
331 TestMultiNode/serial/MultiNodeLabels 0.07
332 TestMultiNode/serial/ProfileList 0.47
333 TestMultiNode/serial/CopyFile 6.14
334 TestMultiNode/serial/StopNode 2.2
335 TestMultiNode/serial/StartAfterStop 41.92
336 TestMultiNode/serial/RestartKeepsNodes 336.13
337 TestMultiNode/serial/DeleteNode 2.58
338 TestMultiNode/serial/StopMultiNode 158.79
339 TestMultiNode/serial/RestartMultiNode 84.58
340 TestMultiNode/serial/ValidateNameConflict 38.22
347 TestScheduledStopUnix 107.22
351 TestRunningBinaryUpgrade 406.45
353 TestKubernetesUpgrade 148.93
361 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
364 TestNoKubernetes/serial/StartWithK8s 76.84
365 TestStoppedBinaryUpgrade/Setup 3.25
366 TestStoppedBinaryUpgrade/Upgrade 106.71
367 TestNoKubernetes/serial/StartWithStopK8s 42.98
368 TestNoKubernetes/serial/Start 31.2
369 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
370 TestNoKubernetes/serial/VerifyK8sNotRunning 0.18
371 TestNoKubernetes/serial/ProfileList 29.46
372 TestStoppedBinaryUpgrade/MinikubeLogs 1.05
373 TestNoKubernetes/serial/Stop 1.44
377 TestNoKubernetes/serial/StartNoArgs 21.56
382 TestNetworkPlugins/group/false 4.81
386 TestISOImage/Setup 26.34
387 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.18
389 TestPause/serial/Start 90.89
391 TestISOImage/Binaries/crictl 0.21
392 TestISOImage/Binaries/curl 0.18
393 TestISOImage/Binaries/docker 0.19
394 TestISOImage/Binaries/git 0.2
395 TestISOImage/Binaries/iptables 0.21
396 TestISOImage/Binaries/podman 0.2
397 TestISOImage/Binaries/rsync 0.2
398 TestISOImage/Binaries/socat 0.2
399 TestISOImage/Binaries/wget 0.2
400 TestISOImage/Binaries/VBoxControl 0.19
401 TestISOImage/Binaries/VBoxService 0.2
404 TestStartStop/group/old-k8s-version/serial/FirstStart 58.2
406 TestStartStop/group/no-preload/serial/FirstStart 97.89
407 TestStartStop/group/old-k8s-version/serial/DeployApp 11.42
409 TestStartStop/group/embed-certs/serial/FirstStart 84.77
410 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.25
411 TestStartStop/group/old-k8s-version/serial/Stop 87.58
412 TestStartStop/group/no-preload/serial/DeployApp 11.31
413 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.12
414 TestStartStop/group/no-preload/serial/Stop 87.98
415 TestStartStop/group/embed-certs/serial/DeployApp 10.3
416 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.09
417 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.16
418 TestStartStop/group/old-k8s-version/serial/SecondStart 45.44
419 TestStartStop/group/embed-certs/serial/Stop 74.55
420 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 10.01
421 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.16
422 TestStartStop/group/no-preload/serial/SecondStart 52.31
423 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
424 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
425 TestStartStop/group/old-k8s-version/serial/Pause 2.8
427 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 54.96
428 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
429 TestStartStop/group/embed-certs/serial/SecondStart 52.76
430 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 16.01
431 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.09
432 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.33
433 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.2
434 TestStartStop/group/no-preload/serial/Pause 2.86
435 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 10.01
437 TestStartStop/group/newest-cni/serial/FirstStart 45.21
438 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.68
439 TestStartStop/group/default-k8s-diff-port/serial/Stop 88.12
440 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
441 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
442 TestStartStop/group/embed-certs/serial/Pause 2.6
443 TestNetworkPlugins/group/auto/Start 86.98
444 TestStartStop/group/newest-cni/serial/DeployApp 0
445 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.06
446 TestStartStop/group/newest-cni/serial/Stop 7.82
447 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
448 TestStartStop/group/newest-cni/serial/SecondStart 35.19
449 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
450 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
451 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
452 TestStartStop/group/newest-cni/serial/Pause 3.83
453 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
454 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 48.29
455 TestNetworkPlugins/group/kindnet/Start 116.58
456 TestNetworkPlugins/group/auto/KubeletFlags 0.2
457 TestNetworkPlugins/group/auto/NetCatPod 11.25
458 TestNetworkPlugins/group/auto/DNS 0.19
459 TestNetworkPlugins/group/auto/Localhost 0.15
460 TestNetworkPlugins/group/auto/HairPin 0.13
461 TestNetworkPlugins/group/calico/Start 82.6
462 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 13.01
463 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
464 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
465 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.06
466 TestNetworkPlugins/group/custom-flannel/Start 77.06
467 TestNetworkPlugins/group/enable-default-cni/Start 83.6
468 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
469 TestNetworkPlugins/group/calico/ControllerPod 6.01
470 TestNetworkPlugins/group/kindnet/KubeletFlags 0.2
471 TestNetworkPlugins/group/kindnet/NetCatPod 10.31
472 TestNetworkPlugins/group/calico/KubeletFlags 0.21
473 TestNetworkPlugins/group/calico/NetCatPod 10.28
474 TestNetworkPlugins/group/kindnet/DNS 0.15
475 TestNetworkPlugins/group/kindnet/Localhost 0.15
476 TestNetworkPlugins/group/kindnet/HairPin 0.16
477 TestNetworkPlugins/group/calico/DNS 0.17
478 TestNetworkPlugins/group/calico/Localhost 0.14
479 TestNetworkPlugins/group/calico/HairPin 0.14
480 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
481 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.3
482 TestNetworkPlugins/group/flannel/Start 77.11
483 TestNetworkPlugins/group/bridge/Start 96.73
484 TestNetworkPlugins/group/custom-flannel/DNS 0.16
485 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
486 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
488 TestISOImage/PersistentMounts//data 0.21
489 TestISOImage/PersistentMounts//var/lib/docker 0.18
490 TestISOImage/PersistentMounts//var/lib/cni 0.18
491 TestISOImage/PersistentMounts//var/lib/kubelet 0.17
492 TestISOImage/PersistentMounts//var/lib/minikube 0.18
493 TestISOImage/PersistentMounts//var/lib/toolbox 0.18
494 TestISOImage/PersistentMounts//var/lib/boot2docker 0.19
495 TestISOImage/VersionJSON 0.18
496 TestISOImage/eBPFSupport 0.17
497 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.19
498 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.26
499 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
500 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
501 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
502 TestNetworkPlugins/group/flannel/ControllerPod 6.01
503 TestNetworkPlugins/group/flannel/KubeletFlags 0.18
504 TestNetworkPlugins/group/flannel/NetCatPod 11.23
505 TestNetworkPlugins/group/flannel/DNS 0.14
506 TestNetworkPlugins/group/flannel/Localhost 0.12
507 TestNetworkPlugins/group/flannel/HairPin 0.13
508 TestNetworkPlugins/group/bridge/KubeletFlags 0.19
509 TestNetworkPlugins/group/bridge/NetCatPod 10.24
510 TestNetworkPlugins/group/bridge/DNS 0.14
511 TestNetworkPlugins/group/bridge/Localhost 0.12
512 TestNetworkPlugins/group/bridge/HairPin 0.17
TestDownloadOnly/v1.28.0/json-events (22.31s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-397425 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-397425 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (22.307499446s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (22.31s)
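For reference, this subtest exercises the JSON (-o=json) event output of the download-only start shown above. A minimal sketch of watching the same stream interactively, assuming jq is available on the host (jq is not used by the harness; the profile name is the one from this run):

	# jq is an assumption here, used only to pretty-print the JSON events written to stdout
	out/minikube-linux-amd64 start -o=json --download-only -p download-only-397425 --force \
	  --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2 | jq .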

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1206 09:11:58.938420  396534 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1206 09:11:58.938518  396534 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-392561/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
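As the preload.go lines show, this check comes down to the preload tarball being present in the local cache. A minimal way to confirm the same artifact by hand, assuming the harness's MINIKUBE_HOME (the path is copied from the log line above):

	# Path taken verbatim from the preload.go:203 message; adjust if your MINIKUBE_HOME differs
	ls -lh /home/jenkins/minikube-integration/22047-392561/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4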

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-397425
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-397425: exit status 85 (76.081953ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-397425 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-397425 │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:11:36
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:11:36.689367  396546 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:11:36.689503  396546 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:11:36.689515  396546 out.go:374] Setting ErrFile to fd 2...
	I1206 09:11:36.689523  396546 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:11:36.689749  396546 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-392561/.minikube/bin
	W1206 09:11:36.689942  396546 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22047-392561/.minikube/config/config.json: open /home/jenkins/minikube-integration/22047-392561/.minikube/config/config.json: no such file or directory
	I1206 09:11:36.690424  396546 out.go:368] Setting JSON to true
	I1206 09:11:36.691461  396546 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3237,"bootTime":1765009060,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:11:36.691527  396546 start.go:143] virtualization: kvm guest
	I1206 09:11:36.695091  396546 out.go:99] [download-only-397425] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1206 09:11:36.695255  396546 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22047-392561/.minikube/cache/preloaded-tarball: no such file or directory
	I1206 09:11:36.695327  396546 notify.go:221] Checking for updates...
	I1206 09:11:36.696619  396546 out.go:171] MINIKUBE_LOCATION=22047
	I1206 09:11:36.698309  396546 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:11:36.699889  396546 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22047-392561/kubeconfig
	I1206 09:11:36.701272  396546 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-392561/.minikube
	I1206 09:11:36.702913  396546 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1206 09:11:36.705274  396546 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1206 09:11:36.705638  396546 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:11:36.737240  396546 out.go:99] Using the kvm2 driver based on user configuration
	I1206 09:11:36.737300  396546 start.go:309] selected driver: kvm2
	I1206 09:11:36.737311  396546 start.go:927] validating driver "kvm2" against <nil>
	I1206 09:11:36.737651  396546 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 09:11:36.738199  396546 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1206 09:11:36.738354  396546 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1206 09:11:36.738384  396546 cni.go:84] Creating CNI manager for ""
	I1206 09:11:36.738429  396546 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 09:11:36.738441  396546 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1206 09:11:36.738499  396546 start.go:353] cluster config:
	{Name:download-only-397425 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-397425 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:11:36.738698  396546 iso.go:125] acquiring lock: {Name:mkf36bf2c9901302dc74c7ac02d02007e6a978f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:11:36.740238  396546 out.go:99] Downloading VM boot image ...
	I1206 09:11:36.740271  396546 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso.sha256 -> /home/jenkins/minikube-integration/22047-392561/.minikube/cache/iso/amd64/minikube-v1.37.0-1764843329-22032-amd64.iso
	I1206 09:11:47.186439  396546 out.go:99] Starting "download-only-397425" primary control-plane node in "download-only-397425" cluster
	I1206 09:11:47.186490  396546 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1206 09:11:47.281636  396546 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1206 09:11:47.281685  396546 cache.go:65] Caching tarball of preloaded images
	I1206 09:11:47.281895  396546 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1206 09:11:47.283804  396546 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1206 09:11:47.283833  396546 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1206 09:11:47.381001  396546 preload.go:295] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1206 09:11:47.381139  396546 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/22047-392561/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-397425 host does not exist
	  To start a cluster, run: "minikube start -p download-only-397425"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
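For reference, the download-only start logged above can be rerun by hand with the same flags (a minimal sketch; the profile name is arbitrary and the harness's duplicated --container-runtime flag is dropped):

    out/minikube-linux-amd64 start -o=json --download-only -p download-only-397425 \
      --force --alsologtostderr --kubernetes-version=v1.28.0 \
      --container-runtime=crio --driver=kvm2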

TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-397425
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.34.2/json-events (9.49s)

=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-660787 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-660787 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (9.490397246s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (9.49s)

TestDownloadOnly/v1.34.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1206 09:12:08.817500  396534 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1206 09:12:08.817540  396534 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-392561/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)
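The preload-exists check only asserts that the tarball is already present in the local cache. Assuming the default MINIKUBE_HOME layout shown in the log, the same check by hand is simply (sketch):

    ls ~/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4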

TestDownloadOnly/v1.34.2/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-660787
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-660787: exit status 85 (81.257233ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-397425 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-397425 │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │ 06 Dec 25 09:11 UTC │
	│ delete  │ -p download-only-397425                                                                                                                                                 │ download-only-397425 │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │ 06 Dec 25 09:11 UTC │
	│ start   │ -o=json --download-only -p download-only-660787 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-660787 │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:11:59
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:11:59.382099  396776 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:11:59.382218  396776 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:11:59.382230  396776 out.go:374] Setting ErrFile to fd 2...
	I1206 09:11:59.382237  396776 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:11:59.382429  396776 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-392561/.minikube/bin
	I1206 09:11:59.382972  396776 out.go:368] Setting JSON to true
	I1206 09:11:59.383908  396776 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3259,"bootTime":1765009060,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:11:59.383973  396776 start.go:143] virtualization: kvm guest
	I1206 09:11:59.386029  396776 out.go:99] [download-only-660787] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:11:59.386250  396776 notify.go:221] Checking for updates...
	I1206 09:11:59.387603  396776 out.go:171] MINIKUBE_LOCATION=22047
	I1206 09:11:59.389154  396776 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:11:59.390444  396776 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22047-392561/kubeconfig
	I1206 09:11:59.391818  396776 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-392561/.minikube
	I1206 09:11:59.393341  396776 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1206 09:11:59.395936  396776 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1206 09:11:59.396210  396776 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:11:59.427633  396776 out.go:99] Using the kvm2 driver based on user configuration
	I1206 09:11:59.427694  396776 start.go:309] selected driver: kvm2
	I1206 09:11:59.427701  396776 start.go:927] validating driver "kvm2" against <nil>
	I1206 09:11:59.428034  396776 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 09:11:59.428579  396776 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1206 09:11:59.428731  396776 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1206 09:11:59.428761  396776 cni.go:84] Creating CNI manager for ""
	I1206 09:11:59.428814  396776 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 09:11:59.428824  396776 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1206 09:11:59.428893  396776 start.go:353] cluster config:
	{Name:download-only-660787 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:download-only-660787 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:11:59.429023  396776 iso.go:125] acquiring lock: {Name:mkf36bf2c9901302dc74c7ac02d02007e6a978f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:11:59.430485  396776 out.go:99] Starting "download-only-660787" primary control-plane node in "download-only-660787" cluster
	I1206 09:11:59.430517  396776 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:11:59.881438  396776 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1206 09:11:59.881486  396776 cache.go:65] Caching tarball of preloaded images
	I1206 09:11:59.881677  396776 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 09:11:59.883616  396776 out.go:99] Downloading Kubernetes v1.34.2 preload ...
	I1206 09:11:59.883634  396776 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1206 09:11:59.977991  396776 preload.go:295] Got checksum from GCS API "40ac2ac600e3e4b9dc7a3f8c6cb2ed91"
	I1206 09:11:59.978069  396776 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:40ac2ac600e3e4b9dc7a3f8c6cb2ed91 -> /home/jenkins/minikube-integration/22047-392561/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-660787 host does not exist
	  To start a cluster, run: "minikube start -p download-only-660787"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.08s)
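The non-zero exit (status 85) is tolerated by this subtest: a download-only profile never creates a VM, so "minikube logs" has no host to query and the check only records how long the command took. A quick local confirmation (sketch):

    out/minikube-linux-amd64 logs -p download-only-660787; echo "exit status: $?"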

TestDownloadOnly/v1.34.2/DeleteAll (0.16s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.16s)

TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-660787
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.35.0-beta.0/json-events (10.33s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-548578 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-548578 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (10.333876895s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (10.33s)

TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1206 09:12:19.547819  396534 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
I1206 09:12:19.547869  396534 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-392561/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.13s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-548578
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-548578: exit status 85 (129.078501ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                      ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-397425 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio        │ download-only-397425 │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │ 06 Dec 25 09:11 UTC │
	│ delete  │ -p download-only-397425                                                                                                                                                        │ download-only-397425 │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │ 06 Dec 25 09:11 UTC │
	│ start   │ -o=json --download-only -p download-only-660787 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio        │ download-only-660787 │ jenkins │ v1.37.0 │ 06 Dec 25 09:11 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 06 Dec 25 09:12 UTC │ 06 Dec 25 09:12 UTC │
	│ delete  │ -p download-only-660787                                                                                                                                                        │ download-only-660787 │ jenkins │ v1.37.0 │ 06 Dec 25 09:12 UTC │ 06 Dec 25 09:12 UTC │
	│ start   │ -o=json --download-only -p download-only-548578 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-548578 │ jenkins │ v1.37.0 │ 06 Dec 25 09:12 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:12:09
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:12:09.268322  396966 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:12:09.268566  396966 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:12:09.268574  396966 out.go:374] Setting ErrFile to fd 2...
	I1206 09:12:09.268578  396966 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:12:09.268790  396966 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-392561/.minikube/bin
	I1206 09:12:09.269307  396966 out.go:368] Setting JSON to true
	I1206 09:12:09.270263  396966 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3269,"bootTime":1765009060,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:12:09.270321  396966 start.go:143] virtualization: kvm guest
	I1206 09:12:09.272246  396966 out.go:99] [download-only-548578] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:12:09.272500  396966 notify.go:221] Checking for updates...
	I1206 09:12:09.273612  396966 out.go:171] MINIKUBE_LOCATION=22047
	I1206 09:12:09.274980  396966 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:12:09.276426  396966 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22047-392561/kubeconfig
	I1206 09:12:09.277589  396966 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-392561/.minikube
	I1206 09:12:09.278726  396966 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1206 09:12:09.281256  396966 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1206 09:12:09.281600  396966 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:12:09.312111  396966 out.go:99] Using the kvm2 driver based on user configuration
	I1206 09:12:09.312151  396966 start.go:309] selected driver: kvm2
	I1206 09:12:09.312158  396966 start.go:927] validating driver "kvm2" against <nil>
	I1206 09:12:09.312470  396966 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 09:12:09.312930  396966 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1206 09:12:09.313067  396966 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1206 09:12:09.313093  396966 cni.go:84] Creating CNI manager for ""
	I1206 09:12:09.313137  396966 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1206 09:12:09.313146  396966 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1206 09:12:09.313188  396966 start.go:353] cluster config:
	{Name:download-only-548578 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:download-only-548578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loc
al ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:12:09.313278  396966 iso.go:125] acquiring lock: {Name:mkf36bf2c9901302dc74c7ac02d02007e6a978f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:12:09.314686  396966 out.go:99] Starting "download-only-548578" primary control-plane node in "download-only-548578" cluster
	I1206 09:12:09.314721  396966 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 09:12:09.770369  396966 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1206 09:12:09.770402  396966 cache.go:65] Caching tarball of preloaded images
	I1206 09:12:09.770595  396966 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 09:12:09.772587  396966 out.go:99] Downloading Kubernetes v1.35.0-beta.0 preload ...
	I1206 09:12:09.772617  396966 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1206 09:12:09.869494  396966 preload.go:295] Got checksum from GCS API "b4861df7675d96066744278d08e2cd35"
	I1206 09:12:09.869566  396966 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:b4861df7675d96066744278d08e2cd35 -> /home/jenkins/minikube-integration/22047-392561/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1206 09:12:18.591727  396966 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1206 09:12:18.592170  396966 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/download-only-548578/config.json ...
	I1206 09:12:18.592214  396966 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/download-only-548578/config.json: {Name:mk07a6c0cc181b2db2df0b43c1c19a744ed0bdd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:12:18.592388  396966 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 09:12:18.592559  396966 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/22047-392561/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubectl
	
	
	* The control-plane node download-only-548578 host does not exist
	  To start a cluster, run: "minikube start -p download-only-548578"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.13s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.17s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.17s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-548578
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.15s)

TestBinaryMirror (0.67s)

=== RUN   TestBinaryMirror
I1206 09:12:20.465606  396534 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-961783 --alsologtostderr --binary-mirror http://127.0.0.1:35409 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-961783" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-961783
--- PASS: TestBinaryMirror (0.67s)
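TestBinaryMirror starts a download-only profile that fetches the Kubernetes binaries from a local HTTP mirror instead of dl.k8s.io. An equivalent manual run (sketch; 127.0.0.1:35409 is the ad-hoc mirror the harness served for this run, so substitute any reachable mirror URL):

    out/minikube-linux-amd64 start --download-only -p binary-mirror-961783 --alsologtostderr \
      --binary-mirror http://127.0.0.1:35409 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 delete -p binary-mirror-961783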

TestOffline (69.02s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-823832 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-823832 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m8.114000295s)
helpers_test.go:175: Cleaning up "offline-crio-823832" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-823832
--- PASS: TestOffline (69.02s)
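The offline test boils down to a plain start with 3 GB of memory and --wait=true. The equivalent manual invocation, copied from the run above (sketch):

    out/minikube-linux-amd64 start -p offline-crio-823832 --alsologtostderr -v=1 \
      --memory=3072 --wait=true --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 delete -p offline-crio-823832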

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-774690
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-774690: exit status 85 (73.963948ms)

-- stdout --
	* Profile "addons-774690" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-774690"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-774690
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-774690: exit status 85 (74.81804ms)

-- stdout --
	* Profile "addons-774690" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-774690"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (127.13s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-774690 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-774690 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m7.125363938s)
--- PASS: TestAddons/Setup (127.13s)
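The setup start enables every addon exercised by the parallel subtests below in a single invocation. Trimmed to the addons that matter for the rest of this report, an equivalent start looks like (sketch; the complete flag list is in the invocation above):

    out/minikube-linux-amd64 start -p addons-774690 --wait=true --memory=4096 \
      --driver=kvm2 --container-runtime=crio \
      --addons=registry --addons=metrics-server --addons=csi-hostpath-driver \
      --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher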

TestAddons/serial/GCPAuth/Namespaces (0.14s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-774690 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-774690 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

TestAddons/serial/GCPAuth/FakeCredentials (11.53s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-774690 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-774690 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [ccc10db2-3a00-4383-80ab-805fd3af8161] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [ccc10db2-3a00-4383-80ab-805fd3af8161] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 11.004199865s
addons_test.go:694: (dbg) Run:  kubectl --context addons-774690 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-774690 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-774690 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.53s)
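The gcp-auth checks come down to confirming that the admission webhook injected the fake credentials into an ordinary pod. The same verification by hand, against the busybox pod the test created (sketch):

    kubectl --context addons-774690 exec busybox -- printenv GOOGLE_APPLICATION_CREDENTIALS
    kubectl --context addons-774690 exec busybox -- printenv GOOGLE_CLOUD_PROJECT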

TestAddons/parallel/Registry (18.77s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 12.453883ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-4gkjr" [0b1de7e3-a280-4a46-a545-e46a47e746b0] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.006286027s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-t6flj" [50457566-2e31-43a8-9fba-b01c71f057b8] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.006815309s
addons_test.go:392: (dbg) Run:  kubectl --context addons-774690 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-774690 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-774690 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.860896535s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-774690 ip
2025/12/06 09:15:06 [DEBUG] GET http://192.168.39.249:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-774690 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.77s)
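The registry check pulls the addon's service through cluster DNS from a throwaway busybox pod. The core probe, copied from the invocation above minus the interactive flags (sketch):

    kubectl --context addons-774690 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"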

TestAddons/parallel/RegistryCreds (0.71s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 5.35509ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-774690
addons_test.go:332: (dbg) Run:  kubectl --context addons-774690 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-774690 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.71s)

TestAddons/parallel/InspektorGadget (11.71s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-98ffj" [e12b4586-37b9-4e6e-a02a-d54ae6f0dd62] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.00380935s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-774690 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-774690 addons disable inspektor-gadget --alsologtostderr -v=1: (5.704018928s)
--- PASS: TestAddons/parallel/InspektorGadget (11.71s)

TestAddons/parallel/MetricsServer (6.99s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 4.822901ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-clrcl" [34e1f363-ac29-415d-89c3-bfe4ac513e1f] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.005758797s
addons_test.go:463: (dbg) Run:  kubectl --context addons-774690 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-774690 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.99s)
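Once the metrics-server pod is healthy, the functional check is just the metrics API through kubectl top. The test queries pod metrics; node metrics are an equally quick sanity check not run by the harness (sketch):

    kubectl --context addons-774690 top pods -n kube-system
    kubectl --context addons-774690 top nodes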

TestAddons/parallel/CSI (51.32s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1206 09:15:11.015524  396534 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1206 09:15:11.022037  396534 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1206 09:15:11.022075  396534 kapi.go:107] duration metric: took 6.590092ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 6.603372ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-774690 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774690 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774690 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774690 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774690 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774690 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774690 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774690 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774690 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774690 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774690 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774690 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774690 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774690 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774690 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774690 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774690 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774690 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-774690 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [c67d4519-6ecc-4c61-bded-a51cc156439a] Pending
helpers_test.go:352: "task-pv-pod" [c67d4519-6ecc-4c61-bded-a51cc156439a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [c67d4519-6ecc-4c61-bded-a51cc156439a] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.004663043s
addons_test.go:572: (dbg) Run:  kubectl --context addons-774690 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-774690 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-774690 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-774690 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-774690 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-774690 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774690 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774690 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774690 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774690 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-774690 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [1e2c1934-94c4-4239-ab6d-3ae61f07f765] Pending
helpers_test.go:352: "task-pv-pod-restore" [1e2c1934-94c4-4239-ab6d-3ae61f07f765] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [1e2c1934-94c4-4239-ab6d-3ae61f07f765] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004417991s
addons_test.go:614: (dbg) Run:  kubectl --context addons-774690 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-774690 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-774690 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-774690 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-774690 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-774690 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.926740089s)
--- PASS: TestAddons/parallel/CSI (51.32s)
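The long run of helpers_test.go:402 lines above is a poll on the PVC phase via jsonpath. A compact bash equivalent of that wait, assuming Bound is the target phase (sketch):

    until [ "$(kubectl --context addons-774690 get pvc hpvc -o jsonpath='{.status.phase}')" = "Bound" ]; do
      sleep 2
    done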

TestAddons/parallel/Headlamp (22.3s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-774690 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-774690 --alsologtostderr -v=1: (1.06765563s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-dfcdc64b-jl5md" [13f58d00-6587-4eb4-a6f5-87a896b7984a] Pending
helpers_test.go:352: "headlamp-dfcdc64b-jl5md" [13f58d00-6587-4eb4-a6f5-87a896b7984a] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-dfcdc64b-jl5md" [13f58d00-6587-4eb4-a6f5-87a896b7984a] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 15.010111745s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-774690 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-774690 addons disable headlamp --alsologtostderr -v=1: (6.221061272s)
--- PASS: TestAddons/parallel/Headlamp (22.30s)

TestAddons/parallel/CloudSpanner (6.59s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-x4dpp" [a0f0a890-3709-4e4d-b5cb-3775df11f009] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004962985s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-774690 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.59s)

TestAddons/parallel/LocalPath (58.99s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-774690 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-774690 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774690 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774690 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774690 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774690 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774690 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774690 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774690 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-774690 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [b8798110-1ff9-4d1a-8540-09f2102de1d7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [b8798110-1ff9-4d1a-8540-09f2102de1d7] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [b8798110-1ff9-4d1a-8540-09f2102de1d7] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 8.005204176s
addons_test.go:967: (dbg) Run:  kubectl --context addons-774690 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-774690 ssh "cat /opt/local-path-provisioner/pvc-6faf3b95-bd02-4761-afb7-95d974158c7c_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-774690 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-774690 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-774690 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-774690 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.126690215s)
--- PASS: TestAddons/parallel/LocalPath (58.99s)
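The local-path check reads back, through the node filesystem, the file written by the test pod. Reproducing it requires the generated PVC name, so the path below is illustrative only (the pvc-... UID is specific to this run):

    out/minikube-linux-amd64 -p addons-774690 ssh \
      "cat /opt/local-path-provisioner/pvc-6faf3b95-bd02-4761-afb7-95d974158c7c_default_test-pvc/file1"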

TestAddons/parallel/NvidiaDevicePlugin (6.82s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-vdltq" [6bd89c20-b241-4230-9f16-b5904f3e8fd6] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.006773464s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-774690 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.82s)

                                                
                                    
TestAddons/parallel/Yakd (11.17s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-v6752" [ff7d6212-0b2f-4ff0-9884-40ef2f15d440] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004467578s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-774690 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-774690 addons disable yakd --alsologtostderr -v=1: (6.165336424s)
--- PASS: TestAddons/parallel/Yakd (11.17s)

                                                
                                    
TestAddons/StoppedEnableDisable (86.37s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-774690
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-774690: (1m26.140358543s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-774690
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-774690
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-774690
--- PASS: TestAddons/StoppedEnableDisable (86.37s)

                                                
                                    
TestCertOptions (41.82s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-322688 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-322688 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (40.466194016s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-322688 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-322688 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-322688 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-322688" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-322688
--- PASS: TestCertOptions (41.82s)
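To repeat this check outside the suite, roughly the following sequence works (flag values copied from the run above; the grep target is the standard openssl section header and is shown only as an illustration):

    # start a profile with extra apiserver SANs and a non-default apiserver port
    out/minikube-linux-amd64 start -p cert-options-322688 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 --container-runtime=crio
    # the extra IPs and names should then show up in the apiserver certificate's SAN list
    out/minikube-linux-amd64 -p cert-options-322688 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"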

                                                
                                    
TestCertExpiration (625.77s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-694719 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
E1206 10:34:28.983297  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-694719 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m11.010860393s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-694719 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-694719 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (6m13.854181502s)
helpers_test.go:175: Cleaning up "cert-expiration-694719" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-694719
--- PASS: TestCertExpiration (625.77s)
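What this exercises, roughly: the first start issues cluster certificates that expire after 3 minutes, and the second start, run after that window has passed, is expected to regenerate them with the longer 8760h lifetime. A manual sketch using the flags from this run:

    out/minikube-linux-amd64 start -p cert-expiration-694719 --memory=3072 --cert-expiration=3m --driver=kvm2 --container-runtime=crio
    # wait for the 3m expiry to pass, then restart with a longer expiration
    out/minikube-linux-amd64 start -p cert-expiration-694719 --memory=3072 --cert-expiration=8760h --driver=kvm2 --container-runtime=crio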

                                                
                                    
TestForceSystemdFlag (80.82s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-524307 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1206 10:35:27.622526  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-310626/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-524307 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m19.794310724s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-524307 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-524307" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-524307
--- PASS: TestForceSystemdFlag (80.82s)
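A minimal sketch of the same check, assuming --force-systemd selects the systemd cgroup manager in the CRI-O drop-in (the grep value is an expectation, not something taken from this log):

    out/minikube-linux-amd64 start -p force-systemd-flag-524307 --memory=3072 --force-systemd --driver=kvm2 --container-runtime=crio
    # expected to show something like: cgroup_manager = "systemd"
    out/minikube-linux-amd64 -p force-systemd-flag-524307 ssh "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"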

                                                
                                    
TestForceSystemdEnv (58.23s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-294790 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-294790 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (57.344621238s)
helpers_test.go:175: Cleaning up "force-systemd-env-294790" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-294790
--- PASS: TestForceSystemdEnv (58.23s)

                                                
                                    
TestErrorSpam/setup (39.23s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-938380 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-938380 --driver=kvm2  --container-runtime=crio
E1206 09:19:28.984815  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:19:28.991328  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:19:29.002862  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:19:29.024409  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:19:29.065967  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:19:29.147733  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:19:29.309526  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:19:29.631364  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:19:30.272994  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:19:31.554513  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:19:34.117465  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:19:39.238901  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:19:49.480639  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-938380 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-938380 --driver=kvm2  --container-runtime=crio: (39.227803684s)
--- PASS: TestErrorSpam/setup (39.23s)

                                                
                                    
TestErrorSpam/start (0.36s)
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-938380 --log_dir /tmp/nospam-938380 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-938380 --log_dir /tmp/nospam-938380 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-938380 --log_dir /tmp/nospam-938380 start --dry-run
--- PASS: TestErrorSpam/start (0.36s)

                                                
                                    
TestErrorSpam/status (0.69s)
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-938380 --log_dir /tmp/nospam-938380 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-938380 --log_dir /tmp/nospam-938380 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-938380 --log_dir /tmp/nospam-938380 status
--- PASS: TestErrorSpam/status (0.69s)

                                                
                                    
TestErrorSpam/pause (1.57s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-938380 --log_dir /tmp/nospam-938380 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-938380 --log_dir /tmp/nospam-938380 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-938380 --log_dir /tmp/nospam-938380 pause
--- PASS: TestErrorSpam/pause (1.57s)

                                                
                                    
TestErrorSpam/unpause (1.8s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-938380 --log_dir /tmp/nospam-938380 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-938380 --log_dir /tmp/nospam-938380 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-938380 --log_dir /tmp/nospam-938380 unpause
--- PASS: TestErrorSpam/unpause (1.80s)

                                                
                                    
TestErrorSpam/stop (74.72s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-938380 --log_dir /tmp/nospam-938380 stop
E1206 09:20:09.963011  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:20:50.925641  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-938380 --log_dir /tmp/nospam-938380 stop: (1m11.670170355s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-938380 --log_dir /tmp/nospam-938380 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-938380 --log_dir /tmp/nospam-938380 stop: (1.991944587s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-938380 --log_dir /tmp/nospam-938380 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-938380 --log_dir /tmp/nospam-938380 stop: (1.054908218s)
--- PASS: TestErrorSpam/stop (74.72s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22047-392561/.minikube/files/etc/test/nested/copy/396534/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (48.98s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-310626 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-310626 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (48.983146309s)
--- PASS: TestFunctional/serial/StartWithProxy (48.98s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (51.15s)
=== RUN   TestFunctional/serial/SoftStart
I1206 09:22:04.608735  396534 config.go:182] Loaded profile config "functional-310626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-310626 --alsologtostderr -v=8
E1206 09:22:12.847151  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-310626 --alsologtostderr -v=8: (51.150237205s)
functional_test.go:678: soft start took 51.1510603s for "functional-310626" cluster.
I1206 09:22:55.759539  396534 config.go:182] Loaded profile config "functional-310626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (51.15s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-310626 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.35s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-310626 cache add registry.k8s.io/pause:3.1: (1.076333012s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-310626 cache add registry.k8s.io/pause:3.3: (1.147137878s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-310626 cache add registry.k8s.io/pause:latest: (1.12162599s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.35s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.16s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-310626 /tmp/TestFunctionalserialCacheCmdcacheadd_local3332051967/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 cache add minikube-local-cache-test:functional-310626
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-310626 cache add minikube-local-cache-test:functional-310626: (1.794605171s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 cache delete minikube-local-cache-test:functional-310626
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-310626
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.16s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.2s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.20s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.61s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-310626 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (183.59386ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-310626 cache reload: (1.008458128s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.61s)
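The sequence above can be reproduced directly: removing the image inside the node makes crictl inspecti fail, and cache reload pushes it back from the host-side cache (commands copied from this run):

    out/minikube-linux-amd64 -p functional-310626 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-310626 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: no such image
    out/minikube-linux-amd64 -p functional-310626 cache reload
    out/minikube-linux-amd64 -p functional-310626 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again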

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.13s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 kubectl -- --context functional-310626 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-310626 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctional/serial/ExtraConfig (53.22s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-310626 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-310626 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (53.216793549s)
functional_test.go:776: restart took 53.216939869s for "functional-310626" cluster.
I1206 09:23:56.925788  396534 config.go:182] Loaded profile config "functional-310626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (53.22s)
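For reference, --extra-config takes component.key=value pairs that are passed through to the named component on (re)start; the invocation from this run restarts the existing profile with an extra apiserver admission plugin:

    out/minikube-linux-amd64 start -p functional-310626 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all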

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-310626 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.4s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-310626 logs: (1.39464168s)
--- PASS: TestFunctional/serial/LogsCmd (1.40s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.33s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 logs --file /tmp/TestFunctionalserialLogsFileCmd1578062698/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-310626 logs --file /tmp/TestFunctionalserialLogsFileCmd1578062698/001/logs.txt: (1.326588082s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.33s)

                                                
                                    
TestFunctional/serial/InvalidService (4.18s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-310626 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-310626
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-310626: exit status 115 (250.031012ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL             │
	├───────────┼─────────────┼─────────────┼────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.32:32704 │
	└───────────┴─────────────┴─────────────┴────────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-310626 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.18s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.44s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-310626 config get cpus: exit status 14 (66.196139ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-310626 config get cpus: exit status 14 (71.766684ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)
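In short, config get exits with status 14 when the key is unset (seen twice above) and otherwise returns the stored value; the set/get/unset round trip from this run:

    out/minikube-linux-amd64 -p functional-310626 config set cpus 2
    out/minikube-linux-amd64 -p functional-310626 config get cpus      # returns the stored value
    out/minikube-linux-amd64 -p functional-310626 config unset cpus
    out/minikube-linux-amd64 -p functional-310626 config get cpus      # exit status 14: key not found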

                                                
                                    
TestFunctional/parallel/DashboardCmd (17.99s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-310626 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-310626 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 403312: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (17.99s)

                                                
                                    
TestFunctional/parallel/DryRun (0.25s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-310626 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-310626 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (125.250311ms)

-- stdout --
	* [functional-310626] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-392561/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-392561/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1206 09:24:27.897289  403146 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:24:27.897573  403146 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:24:27.897584  403146 out.go:374] Setting ErrFile to fd 2...
	I1206 09:24:27.897589  403146 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:24:27.897839  403146 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-392561/.minikube/bin
	I1206 09:24:27.898351  403146 out.go:368] Setting JSON to false
	I1206 09:24:27.899378  403146 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4008,"bootTime":1765009060,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:24:27.899438  403146 start.go:143] virtualization: kvm guest
	I1206 09:24:27.901949  403146 out.go:179] * [functional-310626] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:24:27.903464  403146 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:24:27.903473  403146 notify.go:221] Checking for updates...
	I1206 09:24:27.906206  403146 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:24:27.907513  403146 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-392561/kubeconfig
	I1206 09:24:27.908972  403146 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-392561/.minikube
	I1206 09:24:27.910630  403146 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:24:27.912152  403146 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:24:27.914141  403146 config.go:182] Loaded profile config "functional-310626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:24:27.914684  403146 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:24:27.949069  403146 out.go:179] * Using the kvm2 driver based on existing profile
	I1206 09:24:27.950267  403146 start.go:309] selected driver: kvm2
	I1206 09:24:27.950288  403146 start.go:927] validating driver "kvm2" against &{Name:functional-310626 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:functional-310626 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.32 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
ntString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:24:27.950443  403146 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:24:27.952771  403146 out.go:203] 
	W1206 09:24:27.954121  403146 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1206 09:24:27.955221  403146 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-310626 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.25s)
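The dry run validates the requested settings against the existing profile without starting anything; a memory request below the usable minimum is rejected (exit status 23 and RSRC_INSUFFICIENT_REQ_MEMORY above), while the same dry run without the undersized request passes:

    out/minikube-linux-amd64 start -p functional-310626 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 --container-runtime=crio   # rejected
    out/minikube-linux-amd64 start -p functional-310626 --dry-run --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio             # passes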

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.15s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-310626 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-310626 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (146.601079ms)

-- stdout --
	* [functional-310626] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-392561/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-392561/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1206 09:24:28.167975  403208 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:24:28.168102  403208 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:24:28.168116  403208 out.go:374] Setting ErrFile to fd 2...
	I1206 09:24:28.168123  403208 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:24:28.168424  403208 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-392561/.minikube/bin
	I1206 09:24:28.168937  403208 out.go:368] Setting JSON to false
	I1206 09:24:28.169913  403208 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4008,"bootTime":1765009060,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:24:28.169996  403208 start.go:143] virtualization: kvm guest
	I1206 09:24:28.172076  403208 out.go:179] * [functional-310626] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1206 09:24:28.173558  403208 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:24:28.173590  403208 notify.go:221] Checking for updates...
	I1206 09:24:28.176168  403208 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:24:28.177678  403208 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-392561/kubeconfig
	I1206 09:24:28.179214  403208 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-392561/.minikube
	I1206 09:24:28.180648  403208 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:24:28.182155  403208 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:24:28.184114  403208 config.go:182] Loaded profile config "functional-310626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:24:28.184637  403208 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:24:28.222038  403208 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1206 09:24:28.223890  403208 start.go:309] selected driver: kvm2
	I1206 09:24:28.223907  403208 start.go:927] validating driver "kvm2" against &{Name:functional-310626 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:functional-310626 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.32 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
ntString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:24:28.224056  403208 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:24:28.226936  403208 out.go:203] 
	W1206 09:24:28.228464  403208 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1206 09:24:28.229954  403208 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.23s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.23s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (21.51s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-310626 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-310626 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-rqlnw" [5be3f962-f91b-49be-9c96-9ba95065547c] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-rqlnw" [5be3f962-f91b-49be-9c96-9ba95065547c] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 21.007008527s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.32:31327
functional_test.go:1680: http://192.168.39.32:31327: success! body:
Request served by hello-node-connect-7d85dfc575-rqlnw

HTTP/1.1 GET /

Host: 192.168.39.32:31327
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (21.51s)
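The same NodePort round trip can be driven by hand (the URL below is the one minikube printed for this run; curl is an illustrative stand-in for the HTTP GET the test performs):

    kubectl --context functional-310626 create deployment hello-node-connect --image kicbase/echo-server
    kubectl --context functional-310626 expose deployment hello-node-connect --type=NodePort --port=8080
    out/minikube-linux-amd64 -p functional-310626 service hello-node-connect --url   # e.g. http://192.168.39.32:31327
    curl http://192.168.39.32:31327/                                                 # echo-server reports the request it served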

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.16s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (45.46s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [d006a183-cf89-4cb2-90ae-c19d44cb582c] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.006739342s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-310626 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-310626 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-310626 get pvc myclaim -o=json
I1206 09:24:10.899061  396534 retry.go:31] will retry after 2.364714189s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:e058c762-e67e-4704-89a4-5c0a382e34a4 ResourceVersion:794 Generation:0 CreationTimestamp:2025-12-06 09:24:10 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc000a85590 VolumeMode:0xc000a855a0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-310626 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-310626 apply -f testdata/storage-provisioner/pod.yaml
I1206 09:24:13.618087  396534 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [f2052aee-0f90-426a-b12f-f23c84ae24de] Pending
helpers_test.go:352: "sp-pod" [f2052aee-0f90-426a-b12f-f23c84ae24de] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [f2052aee-0f90-426a-b12f-f23c84ae24de] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 21.004315272s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-310626 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-310626 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-310626 apply -f testdata/storage-provisioner/pod.yaml
I1206 09:24:35.893875  396534 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [4e3c1338-8c3d-4c39-b598-08ac4d960f3f] Pending
helpers_test.go:352: "sp-pod" [4e3c1338-8c3d-4c39-b598-08ac4d960f3f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [4e3c1338-8c3d-4c39-b598-08ac4d960f3f] Running
2025/12/06 09:24:46 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.003723249s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-310626 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (45.46s)
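For reference, the claim applied from testdata/storage-provisioner/pvc.yaml can be read back out of the last-applied-configuration annotation in the retry log above; a minimal stand-alone equivalent (a sketch, not the test's own file) would be roughly:

kubectl --context functional-310626 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
  volumeMode: Filesystem
EOF

The claim sits in Pending until the hostpath storage-provisioner binds it, which is why the helper above retries until the phase reaches Bound.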

                                                
                                    
TestFunctional/parallel/SSHCmd (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.35s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 ssh -n functional-310626 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 cp functional-310626:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2547418907/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 ssh -n functional-310626 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 ssh -n functional-310626 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.33s)
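The three cp invocations above cover host-to-node, node-to-host, and host-to-a-missing-guest-directory. A condensed sketch of the same pattern, with hypothetical file names (local.txt and copied.txt are not from the test data):

# host file into the node
out/minikube-linux-amd64 -p functional-310626 cp ./local.txt /home/docker/local.txt
# node file back onto the host
out/minikube-linux-amd64 -p functional-310626 cp functional-310626:/home/docker/local.txt ./copied.txt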

                                                
                                    
TestFunctional/parallel/MySQL (21.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-310626 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-42qhh" [689a7f4e-9361-4efa-818d-608fb9f415dc] Pending
helpers_test.go:352: "mysql-5bb876957f-42qhh" [689a7f4e-9361-4efa-818d-608fb9f415dc] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-42qhh" [689a7f4e-9361-4efa-818d-608fb9f415dc] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.005321105s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-310626 exec mysql-5bb876957f-42qhh -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-310626 exec mysql-5bb876957f-42qhh -- mysql -ppassword -e "show databases;": exit status 1 (174.602595ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1206 09:24:23.726433  396534 retry.go:31] will retry after 889.287668ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-310626 exec mysql-5bb876957f-42qhh -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-310626 exec mysql-5bb876957f-42qhh -- mysql -ppassword -e "show databases;": exit status 1 (162.263602ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1206 09:24:24.778636  396534 retry.go:31] will retry after 1.081752786s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-310626 exec mysql-5bb876957f-42qhh -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.74s)
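The two ERROR 2002 failures above are expected noise: the pod is Running, but mysqld has not yet created /var/run/mysqld/mysqld.sock, so the helper backs off and retries. Outside the harness, the same wait could be scripted roughly as:

until kubectl --context functional-310626 exec mysql-5bb876957f-42qhh -- \
    mysql -ppassword -e "show databases;" >/dev/null 2>&1; do
  sleep 2   # back off while mysqld finishes initializing its socket
done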

                                                
                                    
TestFunctional/parallel/FileSync (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/396534/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 ssh "sudo cat /etc/test/nested/copy/396534/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)
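The synced file comes from minikube's file-sync directory: anything placed under $MINIKUBE_HOME/files is copied into the guest at the same relative path. A hypothetical host-side layout that would produce the file checked above (assuming the default ~/.minikube home rather than this CI run's custom one):

mkdir -p ~/.minikube/files/etc/test/nested/copy/396534
echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/396534/hosts
# after the next start, the file appears at /etc/test/nested/copy/396534/hosts inside the VM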

                                                
                                    
TestFunctional/parallel/CertSync (1.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/396534.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 ssh "sudo cat /etc/ssl/certs/396534.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/396534.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 ssh "sudo cat /usr/share/ca-certificates/396534.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3965342.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 ssh "sudo cat /etc/ssl/certs/3965342.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/3965342.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 ssh "sudo cat /usr/share/ca-certificates/3965342.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.25s)
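The .0 names being checked (51391683.0, 3ec20f2e.0) look like OpenSSL subject-hash names for the corresponding .pem files; if so, the expected hash can be derived on the host with something like:

openssl x509 -noout -hash -in 396534.pem   # prints the 8-hex-digit hash used as <hash>.0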

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-310626 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
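The go-template above prints only the label keys of the first node. A couple of more conventional equivalents (a sketch, assuming the single node carries the profile name, not what the test itself runs):

kubectl --context functional-310626 get nodes --show-labels
kubectl --context functional-310626 get node functional-310626 -o jsonpath='{.metadata.labels}'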

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-310626 ssh "sudo systemctl is-active docker": exit status 1 (206.24784ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-310626 ssh "sudo systemctl is-active containerd": exit status 1 (187.863861ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.39s)
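The exit status 3 in both stderr blocks comes from systemctl is-active reporting an inactive unit, so the non-zero exits are the expected result on a crio profile. A loop form of the same check:

for rt in docker containerd; do
  out/minikube-linux-amd64 -p functional-310626 ssh "sudo systemctl is-active $rt" || echo "$rt is not the active runtime"
done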

                                                
                                    
TestFunctional/parallel/License (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.36s)

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
TestFunctional/parallel/Version/components (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.52s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (21.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-310626 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-310626 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-576vg" [e8c4ef58-575f-4bb5-b19b-9a4e262c9f1a] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-576vg" [e8c4ef58-575f-4bb5-b19b-9a4e262c9f1a] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 21.004353296s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (21.23s)
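Condensed, the sequence above is a standard deploy-expose-wait; kubectl wait is a rough substitute (a sketch) for the pod-polling helper the test uses:

kubectl --context functional-310626 create deployment hello-node --image kicbase/echo-server
kubectl --context functional-310626 expose deployment hello-node --type=NodePort --port=8080
kubectl --context functional-310626 wait --for=condition=ready pod -l app=hello-node --timeout=10m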

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.47s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 service list -o json
functional_test.go:1504: Took "500.954097ms" to run "out/minikube-linux-amd64 -p functional-310626 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.50s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.32:30748
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.36s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.54s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-310626 /tmp/TestFunctionalparallelMountCmdany-port3050047659/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765013067304252861" to /tmp/TestFunctionalparallelMountCmdany-port3050047659/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765013067304252861" to /tmp/TestFunctionalparallelMountCmdany-port3050047659/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765013067304252861" to /tmp/TestFunctionalparallelMountCmdany-port3050047659/001/test-1765013067304252861
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-310626 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (224.843414ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1206 09:24:27.529464  396534 retry.go:31] will retry after 531.125689ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  6 09:24 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  6 09:24 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  6 09:24 test-1765013067304252861
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 ssh cat /mount-9p/test-1765013067304252861
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-310626 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [00656170-402c-4702-ad88-c78511abe4f5] Pending
E1206 09:24:28.983498  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox-mount" [00656170-402c-4702-ad88-c78511abe4f5] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [00656170-402c-4702-ad88-c78511abe4f5] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [00656170-402c-4702-ad88-c78511abe4f5] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.005758235s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-310626 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-310626 /tmp/TestFunctionalparallelMountCmdany-port3050047659/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.38s)
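The first findmnt failure is only a race with the background mount helper; retried half a second later it succeeds. Outside the harness the same setup-and-verify could be sketched as (the host directory name is hypothetical):

out/minikube-linux-amd64 mount -p functional-310626 /tmp/hostdir:/mount-9p &
MOUNT_PID=$!
until out/minikube-linux-amd64 -p functional-310626 ssh "findmnt -T /mount-9p | grep 9p"; do sleep 1; done
# ... exercise the mount ...
kill "$MOUNT_PID"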

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.32:30748
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.42s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "270.711544ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "81.110413ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "302.426081ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "76.478139ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-310626 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-310626
localhost/kicbase/echo-server:functional-310626
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-310626 image ls --format short --alsologtostderr:
I1206 09:24:38.474954  403818 out.go:360] Setting OutFile to fd 1 ...
I1206 09:24:38.475097  403818 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:24:38.475107  403818 out.go:374] Setting ErrFile to fd 2...
I1206 09:24:38.475112  403818 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:24:38.475450  403818 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-392561/.minikube/bin
I1206 09:24:38.476208  403818 config.go:182] Loaded profile config "functional-310626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 09:24:38.476369  403818 config.go:182] Loaded profile config "functional-310626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 09:24:38.479183  403818 ssh_runner.go:195] Run: systemctl --version
I1206 09:24:38.482381  403818 main.go:143] libmachine: domain functional-310626 has defined MAC address 52:54:00:d4:b8:e4 in network mk-functional-310626
I1206 09:24:38.482918  403818 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d4:b8:e4", ip: ""} in network mk-functional-310626: {Iface:virbr1 ExpiryTime:2025-12-06 10:21:30 +0000 UTC Type:0 Mac:52:54:00:d4:b8:e4 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:functional-310626 Clientid:01:52:54:00:d4:b8:e4}
I1206 09:24:38.482972  403818 main.go:143] libmachine: domain functional-310626 has defined IP address 192.168.39.32 and MAC address 52:54:00:d4:b8:e4 in network mk-functional-310626
I1206 09:24:38.483137  403818 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/functional-310626/id_rsa Username:docker}
I1206 09:24:38.568394  403818 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-310626 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.95MB │
│ localhost/kicbase/echo-server           │ functional-310626  │ 9056ab77afb8e │ 4.95MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/minikube-local-cache-test     │ functional-310626  │ ed4b49a58d488 │ 3.33kB │
│ registry.k8s.io/kube-proxy              │ v1.34.2            │ 8aa150647e88a │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.2            │ 01e8bacf0f500 │ 76MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.2            │ 88320b5498ff2 │ 53.8MB │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ docker.io/library/nginx                 │ latest             │ 60adc2e137e75 │ 155MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.2            │ a5f569d49a979 │ 89MB   │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-310626 image ls --format table --alsologtostderr:
I1206 09:24:39.006224  403876 out.go:360] Setting OutFile to fd 1 ...
I1206 09:24:39.006545  403876 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:24:39.006559  403876 out.go:374] Setting ErrFile to fd 2...
I1206 09:24:39.006564  403876 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:24:39.006763  403876 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-392561/.minikube/bin
I1206 09:24:39.007367  403876 config.go:182] Loaded profile config "functional-310626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 09:24:39.007466  403876 config.go:182] Loaded profile config "functional-310626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 09:24:39.009938  403876 ssh_runner.go:195] Run: systemctl --version
I1206 09:24:39.013167  403876 main.go:143] libmachine: domain functional-310626 has defined MAC address 52:54:00:d4:b8:e4 in network mk-functional-310626
I1206 09:24:39.013622  403876 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d4:b8:e4", ip: ""} in network mk-functional-310626: {Iface:virbr1 ExpiryTime:2025-12-06 10:21:30 +0000 UTC Type:0 Mac:52:54:00:d4:b8:e4 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:functional-310626 Clientid:01:52:54:00:d4:b8:e4}
I1206 09:24:39.013649  403876 main.go:143] libmachine: domain functional-310626 has defined IP address 192.168.39.32 and MAC address 52:54:00:d4:b8:e4 in network mk-functional-310626
I1206 09:24:39.013816  403876 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/functional-310626/id_rsa Username:docker}
I1206 09:24:39.125641  403876 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-310626 image ls --format json --alsologtostderr:
[{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha2
56:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-310626"],"size":"4945146"},{"id":"60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541"],"repoTags":["docker.io/library/nginx:latest"],"size":"155491845"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85","repoDigests":["registry.k8s.io/kube-apiser
ver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077","registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"89046001"},{"id":"01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb","registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"76004183"},{"id":"8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45","repoDigests":["registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74","registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"73145240"},{"id":"da
86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e5324
5023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952","repoDigests":["registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6","registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"s
ize":"53848919"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807
e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"ed4b49a58d488673473d26f0f483c7997f5e468592b00b9f44654546ee3566a3","repoDigests":["localhost/minikube-local-cache-test@sha256:1004376867f4210d013c9b45b77e26a20b982d126770ccecfc8421d2fc49b257"],"repoTags":["localhost/minikube-local-cache-test:functional-310626"],"size":"3330"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-310626 image ls --format json --alsologtostderr:
I1206 09:24:38.751902  403855 out.go:360] Setting OutFile to fd 1 ...
I1206 09:24:38.752233  403855 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:24:38.752247  403855 out.go:374] Setting ErrFile to fd 2...
I1206 09:24:38.752251  403855 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:24:38.752463  403855 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-392561/.minikube/bin
I1206 09:24:38.753043  403855 config.go:182] Loaded profile config "functional-310626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 09:24:38.753145  403855 config.go:182] Loaded profile config "functional-310626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 09:24:38.756053  403855 ssh_runner.go:195] Run: systemctl --version
I1206 09:24:38.759254  403855 main.go:143] libmachine: domain functional-310626 has defined MAC address 52:54:00:d4:b8:e4 in network mk-functional-310626
I1206 09:24:38.759762  403855 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d4:b8:e4", ip: ""} in network mk-functional-310626: {Iface:virbr1 ExpiryTime:2025-12-06 10:21:30 +0000 UTC Type:0 Mac:52:54:00:d4:b8:e4 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:functional-310626 Clientid:01:52:54:00:d4:b8:e4}
I1206 09:24:38.759797  403855 main.go:143] libmachine: domain functional-310626 has defined IP address 192.168.39.32 and MAC address 52:54:00:d4:b8:e4 in network mk-functional-310626
I1206 09:24:38.759980  403855 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/functional-310626/id_rsa Username:docker}
I1206 09:24:38.862984  403855 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)
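The JSON stdout above is a single array of objects with id, repoDigests, repoTags and size fields; assuming jq is available on the host, the tagged names can be pulled out with:

out/minikube-linux-amd64 -p functional-310626 image ls --format json | jq -r '.[].repoTags[]'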

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-310626 image ls --format yaml --alsologtostderr:
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541
repoTags:
- docker.io/library/nginx:latest
size: "155491845"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
- registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "89046001"
- id: 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
- registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "76004183"
- id: 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45
repoDigests:
- registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "73145240"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: ed4b49a58d488673473d26f0f483c7997f5e468592b00b9f44654546ee3566a3
repoDigests:
- localhost/minikube-local-cache-test@sha256:1004376867f4210d013c9b45b77e26a20b982d126770ccecfc8421d2fc49b257
repoTags:
- localhost/minikube-local-cache-test:functional-310626
size: "3330"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
- registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "53848919"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-310626
size: "4945146"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-310626 image ls --format yaml --alsologtostderr:
I1206 09:24:38.536354  403834 out.go:360] Setting OutFile to fd 1 ...
I1206 09:24:38.536644  403834 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:24:38.536658  403834 out.go:374] Setting ErrFile to fd 2...
I1206 09:24:38.536664  403834 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:24:38.536974  403834 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-392561/.minikube/bin
I1206 09:24:38.537849  403834 config.go:182] Loaded profile config "functional-310626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 09:24:38.537959  403834 config.go:182] Loaded profile config "functional-310626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 09:24:38.540382  403834 ssh_runner.go:195] Run: systemctl --version
I1206 09:24:38.542977  403834 main.go:143] libmachine: domain functional-310626 has defined MAC address 52:54:00:d4:b8:e4 in network mk-functional-310626
I1206 09:24:38.543425  403834 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d4:b8:e4", ip: ""} in network mk-functional-310626: {Iface:virbr1 ExpiryTime:2025-12-06 10:21:30 +0000 UTC Type:0 Mac:52:54:00:d4:b8:e4 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:functional-310626 Clientid:01:52:54:00:d4:b8:e4}
I1206 09:24:38.543454  403834 main.go:143] libmachine: domain functional-310626 has defined IP address 192.168.39.32 and MAC address 52:54:00:d4:b8:e4 in network mk-functional-310626
I1206 09:24:38.543635  403834 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/functional-310626/id_rsa Username:docker}
I1206 09:24:38.633562  403834 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (6.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-310626 ssh pgrep buildkitd: exit status 1 (199.534724ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 image build -t localhost/my-image:functional-310626 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-310626 image build -t localhost/my-image:functional-310626 testdata/build --alsologtostderr: (6.002124616s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-310626 image build -t localhost/my-image:functional-310626 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> c80432cb298
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-310626
--> 839bd146605
Successfully tagged localhost/my-image:functional-310626
839bd14660512cfc802f46ba294c3fb2851e1a2dc31bbee4962dceeb70d92b8e
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-310626 image build -t localhost/my-image:functional-310626 testdata/build --alsologtostderr:
I1206 09:24:38.885657  403865 out.go:360] Setting OutFile to fd 1 ...
I1206 09:24:38.885983  403865 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:24:38.885995  403865 out.go:374] Setting ErrFile to fd 2...
I1206 09:24:38.886001  403865 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:24:38.886199  403865 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-392561/.minikube/bin
I1206 09:24:38.886808  403865 config.go:182] Loaded profile config "functional-310626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 09:24:38.887510  403865 config.go:182] Loaded profile config "functional-310626": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 09:24:38.889993  403865 ssh_runner.go:195] Run: systemctl --version
I1206 09:24:38.892694  403865 main.go:143] libmachine: domain functional-310626 has defined MAC address 52:54:00:d4:b8:e4 in network mk-functional-310626
I1206 09:24:38.893197  403865 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d4:b8:e4", ip: ""} in network mk-functional-310626: {Iface:virbr1 ExpiryTime:2025-12-06 10:21:30 +0000 UTC Type:0 Mac:52:54:00:d4:b8:e4 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:functional-310626 Clientid:01:52:54:00:d4:b8:e4}
I1206 09:24:38.893232  403865 main.go:143] libmachine: domain functional-310626 has defined IP address 192.168.39.32 and MAC address 52:54:00:d4:b8:e4 in network mk-functional-310626
I1206 09:24:38.893414  403865 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/functional-310626/id_rsa Username:docker}
I1206 09:24:39.026140  403865 build_images.go:162] Building image from path: /tmp/build.1880152739.tar
I1206 09:24:39.026253  403865 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1206 09:24:39.049437  403865 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1880152739.tar
I1206 09:24:39.056033  403865 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1880152739.tar: stat -c "%s %y" /var/lib/minikube/build/build.1880152739.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1880152739.tar': No such file or directory
I1206 09:24:39.056080  403865 ssh_runner.go:362] scp /tmp/build.1880152739.tar --> /var/lib/minikube/build/build.1880152739.tar (3072 bytes)
I1206 09:24:39.101609  403865 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1880152739
I1206 09:24:39.125662  403865 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1880152739 -xf /var/lib/minikube/build/build.1880152739.tar
I1206 09:24:39.148820  403865 crio.go:315] Building image: /var/lib/minikube/build/build.1880152739
I1206 09:24:39.148892  403865 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-310626 /var/lib/minikube/build/build.1880152739 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1206 09:24:44.784633  403865 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-310626 /var/lib/minikube/build/build.1880152739 --cgroup-manager=cgroupfs: (5.635690906s)
I1206 09:24:44.784732  403865 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1880152739
I1206 09:24:44.801014  403865 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1880152739.tar
I1206 09:24:44.815776  403865 build_images.go:218] Built localhost/my-image:functional-310626 from /tmp/build.1880152739.tar
I1206 09:24:44.815833  403865 build_images.go:134] succeeded building to: functional-310626
I1206 09:24:44.815845  403865 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.41s)
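For reference, the build exercised above can be replayed by hand with the same CLI calls the test issues; a minimal sketch, assuming a build context equivalent to testdata/build (the log shows it is just FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /). With the cri-o runtime, minikube shells out to podman inside the node, as the "sudo podman build ... --cgroup-manager=cgroupfs" line captured above shows.

# Build an image directly inside the functional-310626 node:
out/minikube-linux-amd64 -p functional-310626 image build \
  -t localhost/my-image:functional-310626 testdata/build --alsologtostderr
# Confirm the tag landed in the node's image store:
out/minikube-linux-amd64 -p functional-310626 image ls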

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.946495215s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-310626
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.97s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 image load --daemon kicbase/echo-server:functional-310626 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-310626 image load --daemon kicbase/echo-server:functional-310626 --alsologtostderr: (1.02159195s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.23s)
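Taken together, Setup and ImageLoadDaemon above boil down to the following host-side sequence; a minimal sketch using only the commands shown in the log (outside CI, a plain minikube binary stands in for out/minikube-linux-amd64):

docker pull kicbase/echo-server:1.0
docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-310626
# Copy the image from the host's Docker daemon into the cluster node:
out/minikube-linux-amd64 -p functional-310626 image load --daemon \
  kicbase/echo-server:functional-310626 --alsologtostderr
# The echo-server tag should now appear in the node's image list:
out/minikube-linux-amd64 -p functional-310626 image ls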

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 image load --daemon kicbase/echo-server:functional-310626 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-310626
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 image load --daemon kicbase/echo-server:functional-310626 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.71s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 image save kicbase/echo-server:functional-310626 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.61s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 image rm kicbase/echo-server:functional-310626 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.80s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-310626 /tmp/TestFunctionalparallelMountCmdspecific-port4183865668/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-310626 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (258.363618ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1206 09:24:35.941386  396534 retry.go:31] will retry after 420.616244ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-310626 /tmp/TestFunctionalparallelMountCmdspecific-port4183865668/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-310626 ssh "sudo umount -f /mount-9p": exit status 1 (180.20542ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-310626 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-310626 /tmp/TestFunctionalparallelMountCmdspecific-port4183865668/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.51s)
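The 9p mount exercised above maps onto this manual sequence; a sketch only, with /tmp/some-host-dir standing in for the test's temporary directory (the mount command stays in the foreground, so it is backgrounded here with &):

out/minikube-linux-amd64 mount -p functional-310626 /tmp/some-host-dir:/mount-9p --port 46464 &
# Verify from inside the guest (the test retries once while the 9p server comes up):
out/minikube-linux-amd64 -p functional-310626 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-310626 ssh -- ls -la /mount-9p
# Tear down; umount reports "not mounted" (exit 32) if the mount daemon already cleaned up:
out/minikube-linux-amd64 -p functional-310626 ssh "sudo umount -f /mount-9p"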

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.15s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-310626
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 image save --daemon kicbase/echo-server:functional-310626 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-310626
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.81s)
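ImageSaveToFile, ImageLoadFromFile and ImageSaveDaemon form a save/load round trip; a sketch with an illustrative tarball path in place of the Jenkins workspace path used above:

# Export the image from the node to a tarball on the host, then re-import it:
out/minikube-linux-amd64 -p functional-310626 image save \
  kicbase/echo-server:functional-310626 /tmp/echo-server-save.tar --alsologtostderr
out/minikube-linux-amd64 -p functional-310626 image load /tmp/echo-server-save.tar --alsologtostderr
# Or copy the image straight into the host's Docker daemon; the test above then inspects it
# under the localhost/ prefix:
out/minikube-linux-amd64 -p functional-310626 image save --daemon \
  kicbase/echo-server:functional-310626 --alsologtostderr
docker image inspect localhost/kicbase/echo-server:functional-310626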

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-310626 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3913373611/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-310626 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3913373611/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-310626 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3913373611/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-310626 ssh "findmnt -T" /mount1: exit status 1 (198.555444ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1206 09:24:37.389028  396534 retry.go:31] will retry after 347.784991ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-310626 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-310626 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-310626 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3913373611/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-310626 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3913373611/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-310626 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3913373611/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.14s)
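VerifyCleanup mounts one host directory at three guest paths and then relies on a single kill switch to stop every mount process for the profile; a sketch with /tmp/shared as a stand-in source directory:

out/minikube-linux-amd64 mount -p functional-310626 /tmp/shared:/mount1 --alsologtostderr -v=1 &
out/minikube-linux-amd64 mount -p functional-310626 /tmp/shared:/mount2 --alsologtostderr -v=1 &
out/minikube-linux-amd64 mount -p functional-310626 /tmp/shared:/mount3 --alsologtostderr -v=1 &
out/minikube-linux-amd64 -p functional-310626 ssh "findmnt -T" /mount1
# One call terminates all background mounts for this profile:
out/minikube-linux-amd64 mount -p functional-310626 --kill=true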

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-310626
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-310626
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-310626
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22047-392561/.minikube/files/etc/test/nested/copy/396534/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (80.91s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-959292 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1206 09:24:56.690700  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-959292 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (1m20.911644949s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (80.91s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (44.29s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1206 09:26:13.007654  396534 config.go:182] Loaded profile config "functional-959292": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-959292 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-959292 --alsologtostderr -v=8: (44.28677021s)
functional_test.go:678: soft start took 44.287269499s for "functional-959292" cluster.
I1206 09:26:57.294996  396534 config.go:182] Loaded profile config "functional-959292": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (44.29s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-959292 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.45s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-959292 cache add registry.k8s.io/pause:3.1: (1.108569988s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-959292 cache add registry.k8s.io/pause:3.3: (1.192296292s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-959292 cache add registry.k8s.io/pause:latest: (1.145269941s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.45s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (2.1s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-959292 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach26151766/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 cache add minikube-local-cache-test:functional-959292
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-959292 cache add minikube-local-cache-test:functional-959292: (1.790301127s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 cache delete minikube-local-cache-test:functional-959292
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-959292
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (2.10s)
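The add_local case caches a locally built image rather than a registry one; a sketch, with ./some-build-context standing in for the test's generated build directory:

docker build -t minikube-local-cache-test:functional-959292 ./some-build-context
# Add the local image to minikube's on-disk cache, then remove it from the cache and the host:
out/minikube-linux-amd64 -p functional-959292 cache add minikube-local-cache-test:functional-959292
out/minikube-linux-amd64 -p functional-959292 cache delete minikube-local-cache-test:functional-959292
docker rmi minikube-local-cache-test:functional-959292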

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.62s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-959292 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (192.915257ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-959292 cache reload: (1.010814679s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.62s)
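cache_reload shows the recovery path when a cached image disappears from the node: crictl inspecti fails with "no such image" until cache reload pushes the cached copy back in. The same commands, outside the harness:

out/minikube-linux-amd64 -p functional-959292 ssh sudo crictl rmi registry.k8s.io/pause:latest
out/minikube-linux-amd64 -p functional-959292 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # fails: image gone
out/minikube-linux-amd64 -p functional-959292 cache reload
out/minikube-linux-amd64 -p functional-959292 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # succeeds again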

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.14s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.14s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 kubectl -- --context functional-959292 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-959292 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-959292 logs: (1.20148927s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.20s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.23s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs4123673547/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-959292 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs4123673547/001/logs.txt: (1.228690013s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.23s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (3.52s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-959292 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-959292
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-959292: exit status 115 (225.44377ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.122:32268 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-959292 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (3.52s)
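InvalidService confirms that minikube service refuses to hand out a URL for a Service with no running backing pod, exiting 115 with SVC_UNREACHABLE as captured above. The equivalent manual check, using the test's fixture:

kubectl --context functional-959292 apply -f testdata/invalidsvc.yaml
out/minikube-linux-amd64 service invalid-svc -p functional-959292   # expected to fail with exit status 115
kubectl --context functional-959292 delete -f testdata/invalidsvc.yaml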

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.46s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-959292 config get cpus: exit status 14 (68.520362ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-959292 config get cpus: exit status 14 (73.454584ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.46s)
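ConfigCmd checks that config get on an unset key exits 14 while set/unset round-trips cleanly; the same sequence by hand:

out/minikube-linux-amd64 -p functional-959292 config get cpus     # exit 14 while the key is unset
out/minikube-linux-amd64 -p functional-959292 config set cpus 2
out/minikube-linux-amd64 -p functional-959292 config get cpus     # now prints the stored value
out/minikube-linux-amd64 -p functional-959292 config unset cpus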

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-959292 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-959292 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (123.471462ms)

                                                
                                                
-- stdout --
	* [functional-959292] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-392561/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-392561/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 09:43:05.046907  409344 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:43:05.047161  409344 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:43:05.047170  409344 out.go:374] Setting ErrFile to fd 2...
	I1206 09:43:05.047174  409344 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:43:05.047349  409344 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-392561/.minikube/bin
	I1206 09:43:05.047802  409344 out.go:368] Setting JSON to false
	I1206 09:43:05.048788  409344 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5125,"bootTime":1765009060,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:43:05.048842  409344 start.go:143] virtualization: kvm guest
	I1206 09:43:05.051188  409344 out.go:179] * [functional-959292] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:43:05.052576  409344 notify.go:221] Checking for updates...
	I1206 09:43:05.052607  409344 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:43:05.053852  409344 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:43:05.055472  409344 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-392561/kubeconfig
	I1206 09:43:05.056945  409344 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-392561/.minikube
	I1206 09:43:05.058311  409344 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:43:05.059699  409344 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:43:05.061340  409344 config.go:182] Loaded profile config "functional-959292": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:43:05.061872  409344 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:43:05.096652  409344 out.go:179] * Using the kvm2 driver based on existing profile
	I1206 09:43:05.097964  409344 start.go:309] selected driver: kvm2
	I1206 09:43:05.097980  409344 start.go:927] validating driver "kvm2" against &{Name:functional-959292 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-959292 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.122 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:43:05.098084  409344 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:43:05.100375  409344 out.go:203] 
	W1206 09:43:05.102265  409344 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1206 09:43:05.103626  409344 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-959292 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.26s)
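DryRun validates start flags against the existing profile without touching the VM: the 250MB request trips the 1800MB minimum (exit 23, RSRC_INSUFFICIENT_REQ_MEMORY), while the run without a memory override passes. The two invocations from the log:

out/minikube-linux-amd64 start -p functional-959292 --dry-run --memory 250MB --alsologtostderr \
  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
out/minikube-linux-amd64 start -p functional-959292 --dry-run --alsologtostderr -v=1 \
  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.35.0-beta.0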

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-959292 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-959292 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (130.095827ms)

                                                
                                                
-- stdout --
	* [functional-959292] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-392561/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-392561/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 09:43:05.307895  409384 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:43:05.308005  409384 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:43:05.308012  409384 out.go:374] Setting ErrFile to fd 2...
	I1206 09:43:05.308017  409384 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:43:05.308294  409384 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-392561/.minikube/bin
	I1206 09:43:05.308746  409384 out.go:368] Setting JSON to false
	I1206 09:43:05.309662  409384 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5125,"bootTime":1765009060,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:43:05.309739  409384 start.go:143] virtualization: kvm guest
	I1206 09:43:05.311877  409384 out.go:179] * [functional-959292] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1206 09:43:05.313466  409384 notify.go:221] Checking for updates...
	I1206 09:43:05.313471  409384 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:43:05.315144  409384 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:43:05.316606  409384 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-392561/kubeconfig
	I1206 09:43:05.318072  409384 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-392561/.minikube
	I1206 09:43:05.319519  409384 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:43:05.321149  409384 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:43:05.323362  409384 config.go:182] Loaded profile config "functional-959292": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 09:43:05.324142  409384 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:43:05.358478  409384 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1206 09:43:05.359876  409384 start.go:309] selected driver: kvm2
	I1206 09:43:05.359891  409384 start.go:927] validating driver "kvm2" against &{Name:functional-959292 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-959292 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.122 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:43:05.360015  409384 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:43:05.362081  409384 out.go:203] 
	W1206 09:43:05.363570  409384 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1206 09:43:05.364961  409384 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.13s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.7s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.70s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.16s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.31s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.31s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.23s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 ssh -n functional-959292 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 cp functional-959292:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp3724761089/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 ssh -n functional-959292 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 ssh -n functional-959292 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.23s)
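CpCmd covers both copy directions plus a destination whose parent directories do not exist yet; a sketch with /tmp/cp-test.txt standing in for the test's temporary output path:

# Host -> node, then node -> host:
out/minikube-linux-amd64 -p functional-959292 cp testdata/cp-test.txt /home/docker/cp-test.txt
out/minikube-linux-amd64 -p functional-959292 cp functional-959292:/home/docker/cp-test.txt /tmp/cp-test.txt
# Copy into a guest path that does not exist yet, then verify it from inside the node:
out/minikube-linux-amd64 -p functional-959292 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
out/minikube-linux-amd64 -p functional-959292 ssh -n functional-959292 "sudo cat /tmp/does/not/exist/cp-test.txt"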

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/396534/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 ssh "sudo cat /etc/test/nested/copy/396534/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.23s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/396534.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 ssh "sudo cat /etc/ssl/certs/396534.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/396534.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 ssh "sudo cat /usr/share/ca-certificates/396534.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3965342.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 ssh "sudo cat /etc/ssl/certs/3965342.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/3965342.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 ssh "sudo cat /usr/share/ca-certificates/3965342.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.23s)
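
The checks above only cat the synced PEM files and their hash-named copies; a slightly stronger verification would parse them as certificates inside the VM. A minimal sketch, assuming openssl is available in the guest (it is not part of the test itself):

  # confirm the synced file is a parseable X.509 certificate
  minikube -p functional-959292 ssh "sudo openssl x509 -noout -subject -in /etc/ssl/certs/396534.pem"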

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-959292 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.07s)
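
The go-template above prints the label keys of the first node; an equivalent spot check with jsonpath (a sketch, not what the test runs) would be:

  # dump all labels of the first node as a map
  kubectl --context functional-959292 get nodes -o jsonpath='{.items[0].metadata.labels}'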

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.41s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-959292 ssh "sudo systemctl is-active docker": exit status 1 (194.339194ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-959292 ssh "sudo systemctl is-active containerd": exit status 1 (212.736281ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.41s)
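
The non-zero exits above are the expected outcome: `systemctl is-active` returns 0 only when the unit is active, and the captured output (inactive, exit status 3) confirms that neither docker nor containerd is running on this crio cluster. A minimal sketch of the same probe:

  # exits 0 only if the unit is active; prints the state either way
  minikube -p functional-959292 ssh "sudo systemctl is-active docker" \
    || echo "docker is not the active runtime (expected with --container-runtime=crio)"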

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.42s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.42s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.09s)
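
All three UpdateContextCmd subtests invoke the same command; `minikube update-context` refreshes the kubeconfig entry for the profile so it matches the cluster's current IP and port. In its simplest form (a sketch of the command exercised above):

  # rewrite the kubeconfig context for this profile
  minikube -p functional-959292 update-context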

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.33s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.33s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.31s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "242.900084ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "67.431692ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.31s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.33s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "260.142552ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "66.389597ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.33s)
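
The ProfileCmd subtests above exercise the listing variants; the -l/--light forms skip validating cluster status, which matches the much shorter timings recorded for them. Condensed (same commands as above):

  minikube profile list                    # human-readable table
  minikube profile list -l                 # faster listing, no status probe
  minikube profile list -o json            # machine-readable output
  minikube profile list -o json --light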

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.41s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 service list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.41s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.43s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 service list -o json
functional_test.go:1504: Took "426.269888ms" to run "out/minikube-linux-amd64 -p functional-959292 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.43s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-959292 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
localhost/minikube-local-cache-test:functional-959292
localhost/kicbase/echo-server:functional-959292
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-959292 image ls --format short --alsologtostderr:
I1206 09:43:12.778626  409755 out.go:360] Setting OutFile to fd 1 ...
I1206 09:43:12.778766  409755 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:43:12.778778  409755 out.go:374] Setting ErrFile to fd 2...
I1206 09:43:12.778784  409755 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:43:12.779050  409755 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-392561/.minikube/bin
I1206 09:43:12.779629  409755 config.go:182] Loaded profile config "functional-959292": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 09:43:12.779768  409755 config.go:182] Loaded profile config "functional-959292": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 09:43:12.781756  409755 ssh_runner.go:195] Run: systemctl --version
I1206 09:43:12.784311  409755 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
I1206 09:43:12.784946  409755 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
I1206 09:43:12.784980  409755 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
I1206 09:43:12.785164  409755 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/functional-959292/id_rsa Username:docker}
I1206 09:43:12.868222  409755 ssh_runner.go:195] Run: sudo crictl images --output json
W1206 09:43:12.910077  409755 root.go:91] failed to log command end to audit: failed to find a log row with id equals to 3036627a-d76b-4525-95c0-f910455292e2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.20s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-959292 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ localhost/minikube-local-cache-test     │ functional-959292  │ ed4b49a58d488 │ 3.33kB │
│ registry.k8s.io/kube-proxy              │ v1.35.0-beta.0     │ 8a4ded35a3eb1 │ 72MB   │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-beta.0     │ 7bb6219ddab95 │ 52.7MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/kicbase/echo-server           │ functional-959292  │ 9056ab77afb8e │ 4.94MB │
│ registry.k8s.io/coredns/coredns         │ v1.13.1            │ aa5e3ebc0dfed │ 79.2MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-beta.0     │ aa9d02839d8de │ 90.8MB │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-beta.0     │ 45f3cc72d235f │ 76.9MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-959292 image ls --format table --alsologtostderr:
I1206 09:43:13.166983  409807 out.go:360] Setting OutFile to fd 1 ...
I1206 09:43:13.167087  409807 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:43:13.167098  409807 out.go:374] Setting ErrFile to fd 2...
I1206 09:43:13.167103  409807 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:43:13.167349  409807 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-392561/.minikube/bin
I1206 09:43:13.167965  409807 config.go:182] Loaded profile config "functional-959292": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 09:43:13.168059  409807 config.go:182] Loaded profile config "functional-959292": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 09:43:13.170164  409807 ssh_runner.go:195] Run: systemctl --version
I1206 09:43:13.172404  409807 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
I1206 09:43:13.172818  409807 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
I1206 09:43:13.172848  409807 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
I1206 09:43:13.173003  409807 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/functional-959292/id_rsa Username:docker}
I1206 09:43:13.254887  409807 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-959292 image ls --format json --alsologtostderr:
[{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"i
d":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-959292"],"size":"4943877"},{"id":"ed4b49a58d488673473d26f0f483c7997f5e468592b00b9f44654546ee3566a3","repoDigests":["localhost/minikube-local-cache-test@sha256:1004376867f4210d013c9b45b77e26a20b982d126770ccecfc8421d2fc49b257"],"repoTags":["localhost/minikube-local-cache-test:functional-959292"],"size":"3330"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc","repoDigests":["registry.k8s.io/kube-controller-manager
@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d","registry.k8s.io/kube-controller-manager@sha256:ca8b699e445178c1fc4a8f31245d6bd7bd97192cc7b43baa2360522e09b55581"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"76872535"},{"id":"8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810","repoDigests":["registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a","registry.k8s.io/kube-proxy@sha256:70a55889ba3d6b048529c8edae375ce2f20d1204f3bbcacd24e617abe8888b82"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"71977881"},{"id":"7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46","repoDigests":["registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6","registry.k8s.io/kube-scheduler@sha256:bb3d10b07de89c1e36a78794573fdbb7939a465d235a5bd164bae43aec22ee5b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"52747095"},
{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7","registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"79193994"},{"id":"aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58","registry.k8s.io/kube-apiserver@sha256:c95487a138f982d925eb8c59c7fc40761c58af445463ac4df872aee36c5e999c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"90819569"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3
125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-959292 image ls --format json --alsologtostderr:
I1206 09:43:12.977874  409775 out.go:360] Setting OutFile to fd 1 ...
I1206 09:43:12.978004  409775 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:43:12.978017  409775 out.go:374] Setting ErrFile to fd 2...
I1206 09:43:12.978023  409775 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:43:12.978241  409775 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-392561/.minikube/bin
I1206 09:43:12.979006  409775 config.go:182] Loaded profile config "functional-959292": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 09:43:12.979162  409775 config.go:182] Loaded profile config "functional-959292": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 09:43:12.981533  409775 ssh_runner.go:195] Run: systemctl --version
I1206 09:43:12.984082  409775 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
I1206 09:43:12.984446  409775 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
I1206 09:43:12.984472  409775 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
I1206 09:43:12.984720  409775 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/functional-959292/id_rsa Username:docker}
I1206 09:43:13.065064  409775 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-959292 image ls --format yaml --alsologtostderr:
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: ed4b49a58d488673473d26f0f483c7997f5e468592b00b9f44654546ee3566a3
repoDigests:
- localhost/minikube-local-cache-test@sha256:1004376867f4210d013c9b45b77e26a20b982d126770ccecfc8421d2fc49b257
repoTags:
- localhost/minikube-local-cache-test:functional-959292
size: "3330"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79193994"
- id: aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
- registry.k8s.io/kube-apiserver@sha256:c95487a138f982d925eb8c59c7fc40761c58af445463ac4df872aee36c5e999c
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "90819569"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-959292
size: "4943877"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: 45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
- registry.k8s.io/kube-controller-manager@sha256:ca8b699e445178c1fc4a8f31245d6bd7bd97192cc7b43baa2360522e09b55581
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "76872535"
- id: 8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
- registry.k8s.io/kube-proxy@sha256:70a55889ba3d6b048529c8edae375ce2f20d1204f3bbcacd24e617abe8888b82
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "71977881"
- id: 7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
- registry.k8s.io/kube-scheduler@sha256:bb3d10b07de89c1e36a78794573fdbb7939a465d235a5bd164bae43aec22ee5b
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "52747095"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-959292 image ls --format yaml --alsologtostderr:
I1206 09:43:12.778449  409756 out.go:360] Setting OutFile to fd 1 ...
I1206 09:43:12.778565  409756 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:43:12.778570  409756 out.go:374] Setting ErrFile to fd 2...
I1206 09:43:12.778575  409756 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:43:12.778807  409756 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-392561/.minikube/bin
I1206 09:43:12.779371  409756 config.go:182] Loaded profile config "functional-959292": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 09:43:12.779462  409756 config.go:182] Loaded profile config "functional-959292": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 09:43:12.781591  409756 ssh_runner.go:195] Run: systemctl --version
I1206 09:43:12.784016  409756 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
I1206 09:43:12.784459  409756 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
I1206 09:43:12.784488  409756 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
I1206 09:43:12.784626  409756 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/functional-959292/id_rsa Username:docker}
I1206 09:43:12.863512  409756 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.20s)
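
The four ImageList subtests differ only in the output format passed to `image ls`; as the stderr traces show, each run shells into the VM, reads `sudo crictl images --output json`, and renders the result as requested. A sketch of the variants:

  minikube -p functional-959292 image ls --format short   # repo:tag per line
  minikube -p functional-959292 image ls --format table   # boxed table with image IDs and sizes
  minikube -p functional-959292 image ls --format json
  minikube -p functional-959292 image ls --format yaml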

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.74s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-959292 ssh pgrep buildkitd: exit status 1 (164.094172ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 image build -t localhost/my-image:functional-959292 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-959292 image build -t localhost/my-image:functional-959292 testdata/build --alsologtostderr: (3.335252775s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-959292 image build -t localhost/my-image:functional-959292 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 3f51fb4279d
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-959292
--> cd7e55e515e
Successfully tagged localhost/my-image:functional-959292
cd7e55e515e310733373d669ef6ae398a9648df4305cff250d7dbf9509bcc394
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-959292 image build -t localhost/my-image:functional-959292 testdata/build --alsologtostderr:
I1206 09:43:13.142234  409797 out.go:360] Setting OutFile to fd 1 ...
I1206 09:43:13.142507  409797 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:43:13.142517  409797 out.go:374] Setting ErrFile to fd 2...
I1206 09:43:13.142522  409797 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:43:13.142792  409797 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-392561/.minikube/bin
I1206 09:43:13.143383  409797 config.go:182] Loaded profile config "functional-959292": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 09:43:13.144111  409797 config.go:182] Loaded profile config "functional-959292": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 09:43:13.146565  409797 ssh_runner.go:195] Run: systemctl --version
I1206 09:43:13.148663  409797 main.go:143] libmachine: domain functional-959292 has defined MAC address 52:54:00:18:da:98 in network mk-functional-959292
I1206 09:43:13.149061  409797 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:18:da:98", ip: ""} in network mk-functional-959292: {Iface:virbr1 ExpiryTime:2025-12-06 10:25:07 +0000 UTC Type:0 Mac:52:54:00:18:da:98 Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:functional-959292 Clientid:01:52:54:00:18:da:98}
I1206 09:43:13.149093  409797 main.go:143] libmachine: domain functional-959292 has defined IP address 192.168.39.122 and MAC address 52:54:00:18:da:98 in network mk-functional-959292
I1206 09:43:13.149299  409797 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/functional-959292/id_rsa Username:docker}
I1206 09:43:13.231666  409797 build_images.go:162] Building image from path: /tmp/build.2373189440.tar
I1206 09:43:13.231786  409797 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1206 09:43:13.244447  409797 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2373189440.tar
I1206 09:43:13.249537  409797 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2373189440.tar: stat -c "%s %y" /var/lib/minikube/build/build.2373189440.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2373189440.tar': No such file or directory
I1206 09:43:13.249576  409797 ssh_runner.go:362] scp /tmp/build.2373189440.tar --> /var/lib/minikube/build/build.2373189440.tar (3072 bytes)
I1206 09:43:13.291109  409797 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2373189440
I1206 09:43:13.305757  409797 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2373189440 -xf /var/lib/minikube/build/build.2373189440.tar
I1206 09:43:13.318076  409797 crio.go:315] Building image: /var/lib/minikube/build/build.2373189440
I1206 09:43:13.318154  409797 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-959292 /var/lib/minikube/build/build.2373189440 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1206 09:43:16.382056  409797 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-959292 /var/lib/minikube/build/build.2373189440 --cgroup-manager=cgroupfs: (3.063851329s)
I1206 09:43:16.382176  409797 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2373189440
I1206 09:43:16.395850  409797 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2373189440.tar
I1206 09:43:16.408914  409797 build_images.go:218] Built localhost/my-image:functional-959292 from /tmp/build.2373189440.tar
I1206 09:43:16.408954  409797 build_images.go:134] succeeded building to: functional-959292
I1206 09:43:16.408959  409797 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.74s)
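
The STEP lines suggest testdata/build holds a three-step Containerfile (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /); on this crio cluster the build is delegated to `sudo podman build` inside the VM, as the stderr trace shows. The equivalent manual invocation (a sketch) is:

  # build a local image from a directory containing a Dockerfile, then confirm it is listed
  minikube -p functional-959292 image build -t localhost/my-image:functional-959292 testdata/build
  minikube -p functional-959292 image ls | grep my-image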

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.83s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-959292
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.83s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.65s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 image load --daemon kicbase/echo-server:functional-959292 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-959292 image load --daemon kicbase/echo-server:functional-959292 --alsologtostderr: (1.39991669s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.65s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.61s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.61s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.9s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 image load --daemon kicbase/echo-server:functional-959292 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.90s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.65s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-959292
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 image load --daemon kicbase/echo-server:functional-959292 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.65s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.5s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 image save kicbase/echo-server:functional-959292 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.50s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.47s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 image rm kicbase/echo-server:functional-959292 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.47s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.74s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.74s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.54s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-959292
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 image save --daemon kicbase/echo-server:functional-959292 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-959292
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.54s)
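
Taken together, the ImageLoad*/ImageSave*/ImageRemove subtests above walk an image through a round trip between the host Docker daemon, the cluster's container storage, and a tarball. Condensed into one sketch (the tar path here is a placeholder; the run above used a workspace path):

  docker pull kicbase/echo-server:1.0
  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-959292
  minikube -p functional-959292 image load --daemon kicbase/echo-server:functional-959292   # daemon -> cluster
  minikube -p functional-959292 image save kicbase/echo-server:functional-959292 /tmp/echo-server-save.tar
  minikube -p functional-959292 image rm kicbase/echo-server:functional-959292
  minikube -p functional-959292 image load /tmp/echo-server-save.tar                        # tar -> cluster
  minikube -p functional-959292 image save --daemon kicbase/echo-server:functional-959292   # cluster -> daemon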

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.56s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-959292 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3668097071/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-959292 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (160.909171ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1206 09:43:15.059163  396534 retry.go:31] will retry after 656.797645ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-959292 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3668097071/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-959292 ssh "sudo umount -f /mount-9p": exit status 1 (181.808826ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-959292 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-959292 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3668097071/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.56s)
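
The specific-port test starts a 9p mount on a fixed port and then checks it from inside the VM (with one retry while the mount comes up). The essential commands are (a sketch; the host directory is a placeholder):

  # host side: expose a local directory into the VM on port 46464 (runs in the foreground until killed)
  minikube -p functional-959292 mount /tmp/mount-src:/mount-9p --port 46464 &
  # guest side: confirm the 9p mount is visible and browsable
  minikube -p functional-959292 ssh "findmnt -T /mount-9p | grep 9p"
  minikube -p functional-959292 ssh -- ls -la /mount-9p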

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.24s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-959292 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3391273670/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-959292 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3391273670/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-959292 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3391273670/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-959292 ssh "findmnt -T" /mount1: exit status 1 (179.972346ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1206 09:43:16.636814  396534 retry.go:31] will retry after 532.667667ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-959292 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-959292 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-959292 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3391273670/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-959292 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3391273670/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-959292 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3391273670/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
E1206 09:44:04.547314  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-310626/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:44:28.986751  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:45:27.614772  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-310626/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.24s)
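For context, a minimal Go sketch (not part of the suite) of the same verify-then-cleanup flow shown above: start three background mount daemons, probe each target with findmnt over ssh with one retry, then tear everything down with --kill=true. The binary path and profile name are taken from this run; /tmp/mount-src is a placeholder host directory.

// Hedged sketch, not part of the suite: replays the verify-then-cleanup flow above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	const bin = "out/minikube-linux-amd64"
	const profile = "functional-959292"
	const src = "/tmp/mount-src" // placeholder source directory
	targets := []string{"/mount1", "/mount2", "/mount3"}

	// Start one background mount daemon per target, as the test does.
	var daemons []*exec.Cmd
	for _, t := range targets {
		d := exec.Command(bin, "mount", "-p", profile, src+":"+t, "--alsologtostderr", "-v=1")
		if err := d.Start(); err != nil {
			fmt.Println("start mount:", err)
			return
		}
		daemons = append(daemons, d)
	}

	// Check each target with findmnt over ssh, retrying once on failure
	// (the log above shows exactly one retry, after ~533ms).
	for _, t := range targets {
		for attempt := 0; attempt < 2; attempt++ {
			out, err := exec.Command(bin, "-p", profile, "ssh", "findmnt -T "+t).CombinedOutput()
			if err == nil {
				fmt.Print(string(out))
				break
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	// Tear down all mounts, then reap the background daemons.
	_ = exec.Command(bin, "mount", "-p", profile, "--kill=true").Run()
	for _, d := range daemons {
		_ = d.Process.Kill()
		_ = d.Wait()
	}
}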

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-959292
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-959292
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-959292
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (208.98s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1206 09:49:04.546353  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-310626/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:49:28.986835  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-211811 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (3m28.395947461s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (208.98s)
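As a rough illustration of the command under test, this sketch launches the same HA start invocation and then asks for status, streaming both to the console. Binary path, profile name, and flags are copied from the log; driving it this way outside the harness is an assumption.

// Hedged sketch: re-issues the HA start and status calls from the log above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const bin = "out/minikube-linux-amd64"
	const profile = "ha-211811"

	start := exec.Command(bin, "-p", profile, "start", "--ha",
		"--memory", "3072", "--wait", "true",
		"--alsologtostderr", "-v", "5",
		"--driver=kvm2", "--container-runtime=crio")
	start.Stdout, start.Stderr = os.Stdout, os.Stderr
	if err := start.Run(); err != nil {
		fmt.Println("start failed:", err)
		return
	}

	status := exec.Command(bin, "-p", profile, "status", "--alsologtostderr", "-v", "5")
	status.Stdout, status.Stderr = os.Stdout, os.Stderr
	_ = status.Run()
}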

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.97s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-211811 kubectl -- rollout status deployment/busybox: (5.530517762s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 kubectl -- exec busybox-7b57f96db7-6hj9l -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 kubectl -- exec busybox-7b57f96db7-smn9n -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 kubectl -- exec busybox-7b57f96db7-wf2x7 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 kubectl -- exec busybox-7b57f96db7-6hj9l -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 kubectl -- exec busybox-7b57f96db7-smn9n -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 kubectl -- exec busybox-7b57f96db7-wf2x7 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 kubectl -- exec busybox-7b57f96db7-6hj9l -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 kubectl -- exec busybox-7b57f96db7-smn9n -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 kubectl -- exec busybox-7b57f96db7-wf2x7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.97s)
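The step above fans the same three DNS lookups out to every busybox replica. A hedged sketch of that fan-out follows; the pod names are hard-coded placeholders here, whereas the test discovers them with the jsonpath query shown above.

// Hedged sketch: repeats the per-pod DNS checks from the DeployApp step.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const bin = "out/minikube-linux-amd64"
	pods := []string{"busybox-7b57f96db7-6hj9l", "busybox-7b57f96db7-smn9n", "busybox-7b57f96db7-wf2x7"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}

	for _, pod := range pods {
		for _, name := range names {
			out, err := exec.Command(bin, "-p", "ha-211811", "kubectl", "--",
				"exec", pod, "--", "nslookup", name).CombinedOutput()
			// Treat a successful exec whose output mentions an Address as a pass.
			ok := err == nil && strings.Contains(string(out), "Address")
			fmt.Printf("%s -> %s: ok=%v\n", pod, name, ok)
		}
	}
}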

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.32s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 kubectl -- exec busybox-7b57f96db7-6hj9l -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 kubectl -- exec busybox-7b57f96db7-6hj9l -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 kubectl -- exec busybox-7b57f96db7-smn9n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 kubectl -- exec busybox-7b57f96db7-smn9n -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 kubectl -- exec busybox-7b57f96db7-wf2x7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 kubectl -- exec busybox-7b57f96db7-wf2x7 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.32s)
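The probe above extracts the host IP from inside each pod with an nslookup | awk | cut pipeline and then pings it once. A small sketch of the same two-step check for a single pod; the pod name is a placeholder, and the pipeline assumes (as the test does) that busybox's nslookup prints the answer address on its fifth line.

// Hedged sketch: host-reachability probe from inside one busybox pod.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const bin = "out/minikube-linux-amd64"
	const pod = "busybox-7b57f96db7-6hj9l" // placeholder pod name

	lookup := `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`
	out, err := exec.Command(bin, "-p", "ha-211811", "kubectl", "--",
		"exec", pod, "--", "sh", "-c", lookup).Output()
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	hostIP := strings.TrimSpace(string(out))

	ping := fmt.Sprintf("ping -c 1 %s", hostIP)
	if err := exec.Command(bin, "-p", "ha-211811", "kubectl", "--",
		"exec", pod, "--", "sh", "-c", ping).Run(); err != nil {
		fmt.Println("ping failed:", err)
		return
	}
	fmt.Println("host", hostIP, "reachable from", pod)
}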

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (48.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 node add --alsologtostderr -v 5
E1206 09:52:32.055010  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-211811 node add --alsologtostderr -v 5: (47.810799423s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (48.49s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-211811 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.70s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (11.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 cp testdata/cp-test.txt ha-211811:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 ssh -n ha-211811 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 cp ha-211811:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2786286928/001/cp-test_ha-211811.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 ssh -n ha-211811 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 cp ha-211811:/home/docker/cp-test.txt ha-211811-m02:/home/docker/cp-test_ha-211811_ha-211811-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 ssh -n ha-211811 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 ssh -n ha-211811-m02 "sudo cat /home/docker/cp-test_ha-211811_ha-211811-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 cp ha-211811:/home/docker/cp-test.txt ha-211811-m03:/home/docker/cp-test_ha-211811_ha-211811-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 ssh -n ha-211811 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 ssh -n ha-211811-m03 "sudo cat /home/docker/cp-test_ha-211811_ha-211811-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 cp ha-211811:/home/docker/cp-test.txt ha-211811-m04:/home/docker/cp-test_ha-211811_ha-211811-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 ssh -n ha-211811 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 ssh -n ha-211811-m04 "sudo cat /home/docker/cp-test_ha-211811_ha-211811-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 cp testdata/cp-test.txt ha-211811-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 ssh -n ha-211811-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 cp ha-211811-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2786286928/001/cp-test_ha-211811-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 ssh -n ha-211811-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 cp ha-211811-m02:/home/docker/cp-test.txt ha-211811:/home/docker/cp-test_ha-211811-m02_ha-211811.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 ssh -n ha-211811-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 ssh -n ha-211811 "sudo cat /home/docker/cp-test_ha-211811-m02_ha-211811.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 cp ha-211811-m02:/home/docker/cp-test.txt ha-211811-m03:/home/docker/cp-test_ha-211811-m02_ha-211811-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 ssh -n ha-211811-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 ssh -n ha-211811-m03 "sudo cat /home/docker/cp-test_ha-211811-m02_ha-211811-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 cp ha-211811-m02:/home/docker/cp-test.txt ha-211811-m04:/home/docker/cp-test_ha-211811-m02_ha-211811-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 ssh -n ha-211811-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 ssh -n ha-211811-m04 "sudo cat /home/docker/cp-test_ha-211811-m02_ha-211811-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 cp testdata/cp-test.txt ha-211811-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 ssh -n ha-211811-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 cp ha-211811-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2786286928/001/cp-test_ha-211811-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 ssh -n ha-211811-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 cp ha-211811-m03:/home/docker/cp-test.txt ha-211811:/home/docker/cp-test_ha-211811-m03_ha-211811.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 ssh -n ha-211811-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 ssh -n ha-211811 "sudo cat /home/docker/cp-test_ha-211811-m03_ha-211811.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 cp ha-211811-m03:/home/docker/cp-test.txt ha-211811-m02:/home/docker/cp-test_ha-211811-m03_ha-211811-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 ssh -n ha-211811-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 ssh -n ha-211811-m02 "sudo cat /home/docker/cp-test_ha-211811-m03_ha-211811-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 cp ha-211811-m03:/home/docker/cp-test.txt ha-211811-m04:/home/docker/cp-test_ha-211811-m03_ha-211811-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 ssh -n ha-211811-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 ssh -n ha-211811-m04 "sudo cat /home/docker/cp-test_ha-211811-m03_ha-211811-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 cp testdata/cp-test.txt ha-211811-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 ssh -n ha-211811-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 cp ha-211811-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2786286928/001/cp-test_ha-211811-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 ssh -n ha-211811-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 cp ha-211811-m04:/home/docker/cp-test.txt ha-211811:/home/docker/cp-test_ha-211811-m04_ha-211811.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 ssh -n ha-211811-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 ssh -n ha-211811 "sudo cat /home/docker/cp-test_ha-211811-m04_ha-211811.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 cp ha-211811-m04:/home/docker/cp-test.txt ha-211811-m02:/home/docker/cp-test_ha-211811-m04_ha-211811-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 ssh -n ha-211811-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 ssh -n ha-211811-m02 "sudo cat /home/docker/cp-test_ha-211811-m04_ha-211811-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 cp ha-211811-m04:/home/docker/cp-test.txt ha-211811-m03:/home/docker/cp-test_ha-211811-m04_ha-211811-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 ssh -n ha-211811-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 ssh -n ha-211811-m03 "sudo cat /home/docker/cp-test_ha-211811-m04_ha-211811-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (11.04s)
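The CopyFile step is a copy-and-verify matrix: seed each node with testdata/cp-test.txt, copy it from that node to every other node, and read each copy back over ssh. A compact sketch of that loop structure, reduced to the cross-node part and assuming the four node names from this run; the copy back to the host tmp dir is omitted.

// Hedged sketch: cross-node copy-and-verify loop, mirroring the matrix above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const bin = "out/minikube-linux-amd64"
	const profile = "ha-211811"
	nodes := []string{"ha-211811", "ha-211811-m02", "ha-211811-m03", "ha-211811-m04"}

	run := func(args ...string) error {
		return exec.Command(bin, append([]string{"-p", profile}, args...)...).Run()
	}

	for _, src := range nodes {
		// Seed the source node, then fan the file out and read it back over ssh.
		if err := run("cp", "testdata/cp-test.txt", src+":/home/docker/cp-test.txt"); err != nil {
			fmt.Println("seed", src, err)
			continue
		}
		for _, dst := range nodes {
			if dst == src {
				continue
			}
			remote := fmt.Sprintf("%s:/home/docker/cp-test_%s_%s.txt", dst, src, dst)
			if err := run("cp", src+":/home/docker/cp-test.txt", remote); err != nil {
				fmt.Println("copy", src, "->", dst, err)
				continue
			}
			_ = run("ssh", "-n", dst, fmt.Sprintf("sudo cat /home/docker/cp-test_%s_%s.txt", src, dst))
		}
	}
}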

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (82.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 node stop m02 --alsologtostderr -v 5
E1206 09:53:02.365223  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:53:02.371677  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:53:02.383151  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:53:02.404704  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:53:02.446250  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:53:02.527962  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:53:02.689682  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:53:03.011564  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:53:03.653800  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:53:04.935702  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:53:07.498817  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:53:12.620253  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:53:22.862656  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:53:43.344839  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:54:04.548299  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-310626/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-211811 node stop m02 --alsologtostderr -v 5: (1m21.948257976s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-211811 status --alsologtostderr -v 5: exit status 7 (506.678261ms)

                                                
                                                
-- stdout --
	ha-211811
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-211811-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-211811-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-211811-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 09:54:10.157295  414019 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:54:10.157453  414019 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:54:10.157467  414019 out.go:374] Setting ErrFile to fd 2...
	I1206 09:54:10.157474  414019 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:54:10.157880  414019 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-392561/.minikube/bin
	I1206 09:54:10.158085  414019 out.go:368] Setting JSON to false
	I1206 09:54:10.158121  414019 mustload.go:66] Loading cluster: ha-211811
	I1206 09:54:10.158216  414019 notify.go:221] Checking for updates...
	I1206 09:54:10.158520  414019 config.go:182] Loaded profile config "ha-211811": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 09:54:10.158539  414019 status.go:174] checking status of ha-211811 ...
	I1206 09:54:10.160970  414019 status.go:371] ha-211811 host status = "Running" (err=<nil>)
	I1206 09:54:10.160991  414019 host.go:66] Checking if "ha-211811" exists ...
	I1206 09:54:10.164334  414019 main.go:143] libmachine: domain ha-211811 has defined MAC address 52:54:00:ae:d8:ae in network mk-ha-211811
	I1206 09:54:10.164897  414019 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ae:d8:ae", ip: ""} in network mk-ha-211811: {Iface:virbr1 ExpiryTime:2025-12-06 10:48:25 +0000 UTC Type:0 Mac:52:54:00:ae:d8:ae Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-211811 Clientid:01:52:54:00:ae:d8:ae}
	I1206 09:54:10.164930  414019 main.go:143] libmachine: domain ha-211811 has defined IP address 192.168.39.112 and MAC address 52:54:00:ae:d8:ae in network mk-ha-211811
	I1206 09:54:10.165124  414019 host.go:66] Checking if "ha-211811" exists ...
	I1206 09:54:10.165343  414019 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:54:10.168246  414019 main.go:143] libmachine: domain ha-211811 has defined MAC address 52:54:00:ae:d8:ae in network mk-ha-211811
	I1206 09:54:10.168844  414019 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ae:d8:ae", ip: ""} in network mk-ha-211811: {Iface:virbr1 ExpiryTime:2025-12-06 10:48:25 +0000 UTC Type:0 Mac:52:54:00:ae:d8:ae Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-211811 Clientid:01:52:54:00:ae:d8:ae}
	I1206 09:54:10.168881  414019 main.go:143] libmachine: domain ha-211811 has defined IP address 192.168.39.112 and MAC address 52:54:00:ae:d8:ae in network mk-ha-211811
	I1206 09:54:10.169099  414019 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/ha-211811/id_rsa Username:docker}
	I1206 09:54:10.255957  414019 ssh_runner.go:195] Run: systemctl --version
	I1206 09:54:10.262756  414019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:54:10.282557  414019 kubeconfig.go:125] found "ha-211811" server: "https://192.168.39.254:8443"
	I1206 09:54:10.282598  414019 api_server.go:166] Checking apiserver status ...
	I1206 09:54:10.282637  414019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:54:10.303204  414019 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1395/cgroup
	W1206 09:54:10.317551  414019 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1395/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1206 09:54:10.317623  414019 ssh_runner.go:195] Run: ls
	I1206 09:54:10.323433  414019 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1206 09:54:10.328441  414019 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1206 09:54:10.328474  414019 status.go:463] ha-211811 apiserver status = Running (err=<nil>)
	I1206 09:54:10.328488  414019 status.go:176] ha-211811 status: &{Name:ha-211811 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 09:54:10.328510  414019 status.go:174] checking status of ha-211811-m02 ...
	I1206 09:54:10.330206  414019 status.go:371] ha-211811-m02 host status = "Stopped" (err=<nil>)
	I1206 09:54:10.330232  414019 status.go:384] host is not running, skipping remaining checks
	I1206 09:54:10.330240  414019 status.go:176] ha-211811-m02 status: &{Name:ha-211811-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 09:54:10.330258  414019 status.go:174] checking status of ha-211811-m03 ...
	I1206 09:54:10.331758  414019 status.go:371] ha-211811-m03 host status = "Running" (err=<nil>)
	I1206 09:54:10.331781  414019 host.go:66] Checking if "ha-211811-m03" exists ...
	I1206 09:54:10.334398  414019 main.go:143] libmachine: domain ha-211811-m03 has defined MAC address 52:54:00:5b:31:a9 in network mk-ha-211811
	I1206 09:54:10.334859  414019 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:5b:31:a9", ip: ""} in network mk-ha-211811: {Iface:virbr1 ExpiryTime:2025-12-06 10:50:24 +0000 UTC Type:0 Mac:52:54:00:5b:31:a9 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-211811-m03 Clientid:01:52:54:00:5b:31:a9}
	I1206 09:54:10.334891  414019 main.go:143] libmachine: domain ha-211811-m03 has defined IP address 192.168.39.207 and MAC address 52:54:00:5b:31:a9 in network mk-ha-211811
	I1206 09:54:10.335014  414019 host.go:66] Checking if "ha-211811-m03" exists ...
	I1206 09:54:10.335209  414019 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:54:10.337186  414019 main.go:143] libmachine: domain ha-211811-m03 has defined MAC address 52:54:00:5b:31:a9 in network mk-ha-211811
	I1206 09:54:10.337574  414019 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:5b:31:a9", ip: ""} in network mk-ha-211811: {Iface:virbr1 ExpiryTime:2025-12-06 10:50:24 +0000 UTC Type:0 Mac:52:54:00:5b:31:a9 Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:ha-211811-m03 Clientid:01:52:54:00:5b:31:a9}
	I1206 09:54:10.337600  414019 main.go:143] libmachine: domain ha-211811-m03 has defined IP address 192.168.39.207 and MAC address 52:54:00:5b:31:a9 in network mk-ha-211811
	I1206 09:54:10.337788  414019 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/ha-211811-m03/id_rsa Username:docker}
	I1206 09:54:10.421287  414019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:54:10.442015  414019 kubeconfig.go:125] found "ha-211811" server: "https://192.168.39.254:8443"
	I1206 09:54:10.442047  414019 api_server.go:166] Checking apiserver status ...
	I1206 09:54:10.442086  414019 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:54:10.462662  414019 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1825/cgroup
	W1206 09:54:10.474284  414019 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1825/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1206 09:54:10.474356  414019 ssh_runner.go:195] Run: ls
	I1206 09:54:10.479494  414019 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1206 09:54:10.484481  414019 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1206 09:54:10.484512  414019 status.go:463] ha-211811-m03 apiserver status = Running (err=<nil>)
	I1206 09:54:10.484523  414019 status.go:176] ha-211811-m03 status: &{Name:ha-211811-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 09:54:10.484547  414019 status.go:174] checking status of ha-211811-m04 ...
	I1206 09:54:10.486174  414019 status.go:371] ha-211811-m04 host status = "Running" (err=<nil>)
	I1206 09:54:10.486196  414019 host.go:66] Checking if "ha-211811-m04" exists ...
	I1206 09:54:10.488534  414019 main.go:143] libmachine: domain ha-211811-m04 has defined MAC address 52:54:00:4b:fc:ba in network mk-ha-211811
	I1206 09:54:10.488986  414019 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:fc:ba", ip: ""} in network mk-ha-211811: {Iface:virbr1 ExpiryTime:2025-12-06 10:52:03 +0000 UTC Type:0 Mac:52:54:00:4b:fc:ba Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-211811-m04 Clientid:01:52:54:00:4b:fc:ba}
	I1206 09:54:10.489022  414019 main.go:143] libmachine: domain ha-211811-m04 has defined IP address 192.168.39.133 and MAC address 52:54:00:4b:fc:ba in network mk-ha-211811
	I1206 09:54:10.489189  414019 host.go:66] Checking if "ha-211811-m04" exists ...
	I1206 09:54:10.489410  414019 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:54:10.491655  414019 main.go:143] libmachine: domain ha-211811-m04 has defined MAC address 52:54:00:4b:fc:ba in network mk-ha-211811
	I1206 09:54:10.492106  414019 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:fc:ba", ip: ""} in network mk-ha-211811: {Iface:virbr1 ExpiryTime:2025-12-06 10:52:03 +0000 UTC Type:0 Mac:52:54:00:4b:fc:ba Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-211811-m04 Clientid:01:52:54:00:4b:fc:ba}
	I1206 09:54:10.492138  414019 main.go:143] libmachine: domain ha-211811-m04 has defined IP address 192.168.39.133 and MAC address 52:54:00:4b:fc:ba in network mk-ha-211811
	I1206 09:54:10.492279  414019 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/ha-211811-m04/id_rsa Username:docker}
	I1206 09:54:10.579043  414019 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:54:10.596435  414019 status.go:176] ha-211811-m04 status: &{Name:ha-211811-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (82.46s)
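Note that status exits with code 7 above because one control-plane node is stopped: the command itself ran, but the cluster is degraded. A short sketch of how a caller might separate "command failed to run" from "cluster degraded" by inspecting the exit code; the profile name is taken from this run.

// Hedged sketch: read the exit code of "minikube status" for a degraded cluster.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	const bin = "out/minikube-linux-amd64"
	out, err := exec.Command(bin, "-p", "ha-211811", "status", "--alsologtostderr", "-v", "5").Output()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("all nodes running")
	case errors.As(err, &exitErr):
		// In the run above this prints 7 while m02 is stopped.
		fmt.Println("cluster degraded, status exit code:", exitErr.ExitCode())
	default:
		fmt.Println("could not run status:", err)
	}
}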

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.52s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (40.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 node start m02 --alsologtostderr -v 5
E1206 09:54:24.306228  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:54:28.982995  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-211811 node start m02 --alsologtostderr -v 5: (39.174581165s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (40.01s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.75s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (368.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 stop --alsologtostderr -v 5
E1206 09:55:46.228353  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:58:02.364846  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:58:30.070640  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-211811 stop --alsologtostderr -v 5: (4m6.096043506s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 start --wait true --alsologtostderr -v 5
E1206 09:59:04.547279  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-310626/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:59:28.986789  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-211811 start --wait true --alsologtostderr -v 5: (2m1.809520753s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (368.08s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (17.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-211811 node delete m03 --alsologtostderr -v 5: (17.306763621s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.96s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.53s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (242.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 stop --alsologtostderr -v 5
E1206 10:02:07.616472  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-310626/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:03:02.365933  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:04:04.547645  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-310626/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:04:28.985778  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-211811 stop --alsologtostderr -v 5: (4m2.560912484s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-211811 status --alsologtostderr -v 5: exit status 7 (69.628994ms)

                                                
                                                
-- stdout --
	ha-211811
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-211811-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-211811-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 10:05:21.073839  417267 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:05:21.074093  417267 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:05:21.074101  417267 out.go:374] Setting ErrFile to fd 2...
	I1206 10:05:21.074105  417267 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:05:21.074338  417267 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-392561/.minikube/bin
	I1206 10:05:21.074510  417267 out.go:368] Setting JSON to false
	I1206 10:05:21.074536  417267 mustload.go:66] Loading cluster: ha-211811
	I1206 10:05:21.074673  417267 notify.go:221] Checking for updates...
	I1206 10:05:21.074959  417267 config.go:182] Loaded profile config "ha-211811": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:05:21.074976  417267 status.go:174] checking status of ha-211811 ...
	I1206 10:05:21.077071  417267 status.go:371] ha-211811 host status = "Stopped" (err=<nil>)
	I1206 10:05:21.077096  417267 status.go:384] host is not running, skipping remaining checks
	I1206 10:05:21.077103  417267 status.go:176] ha-211811 status: &{Name:ha-211811 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 10:05:21.077121  417267 status.go:174] checking status of ha-211811-m02 ...
	I1206 10:05:21.078200  417267 status.go:371] ha-211811-m02 host status = "Stopped" (err=<nil>)
	I1206 10:05:21.078216  417267 status.go:384] host is not running, skipping remaining checks
	I1206 10:05:21.078221  417267 status.go:176] ha-211811-m02 status: &{Name:ha-211811-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 10:05:21.078234  417267 status.go:174] checking status of ha-211811-m04 ...
	I1206 10:05:21.079479  417267 status.go:371] ha-211811-m04 host status = "Stopped" (err=<nil>)
	I1206 10:05:21.079497  417267 status.go:384] host is not running, skipping remaining checks
	I1206 10:05:21.079503  417267 status.go:176] ha-211811-m04 status: &{Name:ha-211811-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (242.63s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (93.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-211811 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (1m33.029734936s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (93.70s)
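The readiness check above asks kubectl for only the Ready condition of each node through a go-template. A small sketch that runs the same query and counts how many nodes reported True; it assumes kubectl already points at the restarted cluster.

// Hedged sketch: count Ready nodes using the go-template from the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		fmt.Println("kubectl get nodes:", err)
		return
	}
	ready := 0
	for _, field := range strings.Fields(string(out)) {
		if field == "True" {
			ready++
		}
	}
	fmt.Println("Ready nodes:", ready)
}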

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.51s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (103.33s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 node add --control-plane --alsologtostderr -v 5
E1206 10:08:02.364944  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-211811 node add --control-plane --alsologtostderr -v 5: (1m42.632282363s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-211811 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (103.33s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.71s)

                                                
                                    
TestJSONOutput/start/Command (74.09s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-319791 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
E1206 10:09:04.547999  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-310626/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:09:12.058843  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:09:25.432303  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:09:28.985283  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-319791 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m14.088290021s)
--- PASS: TestJSONOutput/start/Command (74.09s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.73s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-319791 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.63s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-319791 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.18s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-319791 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-319791 --output=json --user=testUser: (7.182610202s)
--- PASS: TestJSONOutput/stop/Command (7.18s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-115617 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-115617 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (77.489552ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"2e60ebef-d462-42eb-9380-de7899925eb4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-115617] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"13564ba8-f733-40c8-a4e3-8fc63eb85019","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22047"}}
	{"specversion":"1.0","id":"3e6947c8-2d03-40cc-bcf6-b1f577d078a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9d0c73a7-b7ee-443c-9607-412ee223d71c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22047-392561/kubeconfig"}}
	{"specversion":"1.0","id":"36c76835-71e2-44a8-ae5f-6b6f3664801e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-392561/.minikube"}}
	{"specversion":"1.0","id":"9a34eafd-76ac-4dd4-9d47-d719bf287a3a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"d64e68b8-394f-417e-8337-dff5b9e79ba8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4a7fac0f-8ece-4d2b-9168-bf7172434bd9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-115617" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-115617
--- PASS: TestErrorJSONOutput (0.24s)
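For reference, the stdout captured above from `minikube start --output=json` is one CloudEvents-style JSON object per line. The sketch below shows one way such lines could be consumed; the struct tags mirror the fields visible in the log, while the program itself (package layout, printing logic) is an illustrative assumption, not code from the minikube repository or this test suite.

// Hedged sketch: decode CloudEvents-style lines like the ones captured above.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// cloudEvent mirrors the fields visible in the captured stdout above.
type cloudEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip lines that are not JSON events
		}
		// e.g. "io.k8s.sigs.minikube.error" events carry an exitcode and message.
		fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
	}
}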

                                                
                                    
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (77.34s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-513345 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-513345 --driver=kvm2  --container-runtime=crio: (38.241825563s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-515604 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-515604 --driver=kvm2  --container-runtime=crio: (36.396664489s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-513345
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-515604
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-515604" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-515604
helpers_test.go:175: Cleaning up "first-513345" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-513345
--- PASS: TestMinikubeProfile (77.34s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (22.27s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-342691 --memory=3072 --mount-string /tmp/TestMountStartserial2330346033/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-342691 --memory=3072 --mount-string /tmp/TestMountStartserial2330346033/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (21.26916846s)
--- PASS: TestMountStart/serial/StartWithMountFirst (22.27s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.3s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-342691 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-342691 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.30s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (19.34s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-362907 --memory=3072 --mount-string /tmp/TestMountStartserial2330346033/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-362907 --memory=3072 --mount-string /tmp/TestMountStartserial2330346033/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (18.341142131s)
--- PASS: TestMountStart/serial/StartWithMountSecond (19.34s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.3s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-362907 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-362907 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.30s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.69s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-342691 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.3s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-362907 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-362907 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.30s)

                                                
                                    
TestMountStart/serial/Stop (1.31s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-362907
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-362907: (1.309365509s)
--- PASS: TestMountStart/serial/Stop (1.31s)

                                                
                                    
TestMountStart/serial/RestartStopped (18.8s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-362907
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-362907: (17.804094162s)
--- PASS: TestMountStart/serial/RestartStopped (18.80s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.32s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-362907 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-362907 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.32s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (130.91s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-777422 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1206 10:13:02.365289  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:14:04.546649  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-310626/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:14:28.983457  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-777422 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m10.543690579s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777422 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (130.91s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.16s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-777422 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-777422 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-777422 -- rollout status deployment/busybox: (4.486775866s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-777422 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-777422 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-777422 -- exec busybox-7b57f96db7-ms9vq -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-777422 -- exec busybox-7b57f96db7-s9x7w -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-777422 -- exec busybox-7b57f96db7-ms9vq -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-777422 -- exec busybox-7b57f96db7-s9x7w -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-777422 -- exec busybox-7b57f96db7-ms9vq -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-777422 -- exec busybox-7b57f96db7-s9x7w -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.16s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.93s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-777422 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-777422 -- exec busybox-7b57f96db7-ms9vq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-777422 -- exec busybox-7b57f96db7-ms9vq -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-777422 -- exec busybox-7b57f96db7-s9x7w -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-777422 -- exec busybox-7b57f96db7-s9x7w -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.93s)

                                                
                                    
TestMultiNode/serial/AddNode (41.91s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-777422 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-777422 -v=5 --alsologtostderr: (41.456431106s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777422 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (41.91s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-777422 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.47s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.47s)

                                                
                                    
TestMultiNode/serial/CopyFile (6.14s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777422 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777422 cp testdata/cp-test.txt multinode-777422:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777422 ssh -n multinode-777422 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777422 cp multinode-777422:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3886927368/001/cp-test_multinode-777422.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777422 ssh -n multinode-777422 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777422 cp multinode-777422:/home/docker/cp-test.txt multinode-777422-m02:/home/docker/cp-test_multinode-777422_multinode-777422-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777422 ssh -n multinode-777422 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777422 ssh -n multinode-777422-m02 "sudo cat /home/docker/cp-test_multinode-777422_multinode-777422-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777422 cp multinode-777422:/home/docker/cp-test.txt multinode-777422-m03:/home/docker/cp-test_multinode-777422_multinode-777422-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777422 ssh -n multinode-777422 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777422 ssh -n multinode-777422-m03 "sudo cat /home/docker/cp-test_multinode-777422_multinode-777422-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777422 cp testdata/cp-test.txt multinode-777422-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777422 ssh -n multinode-777422-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777422 cp multinode-777422-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3886927368/001/cp-test_multinode-777422-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777422 ssh -n multinode-777422-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777422 cp multinode-777422-m02:/home/docker/cp-test.txt multinode-777422:/home/docker/cp-test_multinode-777422-m02_multinode-777422.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777422 ssh -n multinode-777422-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777422 ssh -n multinode-777422 "sudo cat /home/docker/cp-test_multinode-777422-m02_multinode-777422.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777422 cp multinode-777422-m02:/home/docker/cp-test.txt multinode-777422-m03:/home/docker/cp-test_multinode-777422-m02_multinode-777422-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777422 ssh -n multinode-777422-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777422 ssh -n multinode-777422-m03 "sudo cat /home/docker/cp-test_multinode-777422-m02_multinode-777422-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777422 cp testdata/cp-test.txt multinode-777422-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777422 ssh -n multinode-777422-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777422 cp multinode-777422-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3886927368/001/cp-test_multinode-777422-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777422 ssh -n multinode-777422-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777422 cp multinode-777422-m03:/home/docker/cp-test.txt multinode-777422:/home/docker/cp-test_multinode-777422-m03_multinode-777422.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777422 ssh -n multinode-777422-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777422 ssh -n multinode-777422 "sudo cat /home/docker/cp-test_multinode-777422-m03_multinode-777422.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777422 cp multinode-777422-m03:/home/docker/cp-test.txt multinode-777422-m02:/home/docker/cp-test_multinode-777422-m03_multinode-777422-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777422 ssh -n multinode-777422-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777422 ssh -n multinode-777422-m02 "sudo cat /home/docker/cp-test_multinode-777422-m03_multinode-777422-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.14s)

                                                
                                    
TestMultiNode/serial/StopNode (2.2s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777422 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-777422 node stop m03: (1.543663099s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777422 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-777422 status: exit status 7 (326.439004ms)

                                                
                                                
-- stdout --
	multinode-777422
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-777422-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-777422-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777422 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-777422 status --alsologtostderr: exit status 7 (333.156372ms)

                                                
                                                
-- stdout --
	multinode-777422
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-777422-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-777422-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 10:15:36.401338  422967 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:15:36.401616  422967 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:15:36.401627  422967 out.go:374] Setting ErrFile to fd 2...
	I1206 10:15:36.401631  422967 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:15:36.401839  422967 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-392561/.minikube/bin
	I1206 10:15:36.402001  422967 out.go:368] Setting JSON to false
	I1206 10:15:36.402028  422967 mustload.go:66] Loading cluster: multinode-777422
	I1206 10:15:36.402116  422967 notify.go:221] Checking for updates...
	I1206 10:15:36.402524  422967 config.go:182] Loaded profile config "multinode-777422": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:15:36.402548  422967 status.go:174] checking status of multinode-777422 ...
	I1206 10:15:36.404976  422967 status.go:371] multinode-777422 host status = "Running" (err=<nil>)
	I1206 10:15:36.404998  422967 host.go:66] Checking if "multinode-777422" exists ...
	I1206 10:15:36.407672  422967 main.go:143] libmachine: domain multinode-777422 has defined MAC address 52:54:00:48:14:21 in network mk-multinode-777422
	I1206 10:15:36.408107  422967 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:48:14:21", ip: ""} in network mk-multinode-777422: {Iface:virbr1 ExpiryTime:2025-12-06 11:12:42 +0000 UTC Type:0 Mac:52:54:00:48:14:21 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:multinode-777422 Clientid:01:52:54:00:48:14:21}
	I1206 10:15:36.408139  422967 main.go:143] libmachine: domain multinode-777422 has defined IP address 192.168.39.237 and MAC address 52:54:00:48:14:21 in network mk-multinode-777422
	I1206 10:15:36.408263  422967 host.go:66] Checking if "multinode-777422" exists ...
	I1206 10:15:36.408452  422967 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:15:36.410650  422967 main.go:143] libmachine: domain multinode-777422 has defined MAC address 52:54:00:48:14:21 in network mk-multinode-777422
	I1206 10:15:36.411084  422967 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:48:14:21", ip: ""} in network mk-multinode-777422: {Iface:virbr1 ExpiryTime:2025-12-06 11:12:42 +0000 UTC Type:0 Mac:52:54:00:48:14:21 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:multinode-777422 Clientid:01:52:54:00:48:14:21}
	I1206 10:15:36.411109  422967 main.go:143] libmachine: domain multinode-777422 has defined IP address 192.168.39.237 and MAC address 52:54:00:48:14:21 in network mk-multinode-777422
	I1206 10:15:36.411258  422967 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/multinode-777422/id_rsa Username:docker}
	I1206 10:15:36.502125  422967 ssh_runner.go:195] Run: systemctl --version
	I1206 10:15:36.509175  422967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 10:15:36.525321  422967 kubeconfig.go:125] found "multinode-777422" server: "https://192.168.39.237:8443"
	I1206 10:15:36.525357  422967 api_server.go:166] Checking apiserver status ...
	I1206 10:15:36.525398  422967 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:15:36.546489  422967 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1376/cgroup
	W1206 10:15:36.557400  422967 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1376/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:15:36.557455  422967 ssh_runner.go:195] Run: ls
	I1206 10:15:36.562421  422967 api_server.go:253] Checking apiserver healthz at https://192.168.39.237:8443/healthz ...
	I1206 10:15:36.567360  422967 api_server.go:279] https://192.168.39.237:8443/healthz returned 200:
	ok
	I1206 10:15:36.567384  422967 status.go:463] multinode-777422 apiserver status = Running (err=<nil>)
	I1206 10:15:36.567393  422967 status.go:176] multinode-777422 status: &{Name:multinode-777422 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 10:15:36.567410  422967 status.go:174] checking status of multinode-777422-m02 ...
	I1206 10:15:36.569081  422967 status.go:371] multinode-777422-m02 host status = "Running" (err=<nil>)
	I1206 10:15:36.569101  422967 host.go:66] Checking if "multinode-777422-m02" exists ...
	I1206 10:15:36.571640  422967 main.go:143] libmachine: domain multinode-777422-m02 has defined MAC address 52:54:00:3b:e6:84 in network mk-multinode-777422
	I1206 10:15:36.572023  422967 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:e6:84", ip: ""} in network mk-multinode-777422: {Iface:virbr1 ExpiryTime:2025-12-06 11:14:09 +0000 UTC Type:0 Mac:52:54:00:3b:e6:84 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:multinode-777422-m02 Clientid:01:52:54:00:3b:e6:84}
	I1206 10:15:36.572053  422967 main.go:143] libmachine: domain multinode-777422-m02 has defined IP address 192.168.39.105 and MAC address 52:54:00:3b:e6:84 in network mk-multinode-777422
	I1206 10:15:36.572222  422967 host.go:66] Checking if "multinode-777422-m02" exists ...
	I1206 10:15:36.572445  422967 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:15:36.574557  422967 main.go:143] libmachine: domain multinode-777422-m02 has defined MAC address 52:54:00:3b:e6:84 in network mk-multinode-777422
	I1206 10:15:36.574948  422967 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3b:e6:84", ip: ""} in network mk-multinode-777422: {Iface:virbr1 ExpiryTime:2025-12-06 11:14:09 +0000 UTC Type:0 Mac:52:54:00:3b:e6:84 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:multinode-777422-m02 Clientid:01:52:54:00:3b:e6:84}
	I1206 10:15:36.574970  422967 main.go:143] libmachine: domain multinode-777422-m02 has defined IP address 192.168.39.105 and MAC address 52:54:00:3b:e6:84 in network mk-multinode-777422
	I1206 10:15:36.575107  422967 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22047-392561/.minikube/machines/multinode-777422-m02/id_rsa Username:docker}
	I1206 10:15:36.653679  422967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 10:15:36.669562  422967 status.go:176] multinode-777422-m02 status: &{Name:multinode-777422-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1206 10:15:36.669602  422967 status.go:174] checking status of multinode-777422-m03 ...
	I1206 10:15:36.671267  422967 status.go:371] multinode-777422-m03 host status = "Stopped" (err=<nil>)
	I1206 10:15:36.671289  422967 status.go:384] host is not running, skipping remaining checks
	I1206 10:15:36.671296  422967 status.go:176] multinode-777422-m03 status: &{Name:multinode-777422-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.20s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (41.92s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777422 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-777422 node start m03 -v=5 --alsologtostderr: (41.396048954s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777422 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (41.92s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (336.13s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-777422
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-777422
E1206 10:18:02.367587  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:18:47.620369  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-310626/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:19:04.547557  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-310626/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-777422: (3m0.974749048s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-777422 --wait=true -v=5 --alsologtostderr
E1206 10:19:28.982701  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-777422 --wait=true -v=5 --alsologtostderr: (2m35.004100376s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-777422
--- PASS: TestMultiNode/serial/RestartKeepsNodes (336.13s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.58s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777422 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-777422 node delete m03: (2.112843617s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777422 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.58s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (158.79s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777422 stop
E1206 10:23:02.367112  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:24:04.547587  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-310626/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:24:28.986550  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-777422 stop: (2m38.649795643s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777422 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-777422 status: exit status 7 (68.292497ms)

                                                
                                                
-- stdout --
	multinode-777422
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-777422-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777422 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-777422 status --alsologtostderr: exit status 7 (67.61886ms)

                                                
                                                
-- stdout --
	multinode-777422
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-777422-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 10:24:36.075580  425866 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:24:36.075883  425866 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:24:36.075894  425866 out.go:374] Setting ErrFile to fd 2...
	I1206 10:24:36.075898  425866 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:24:36.076109  425866 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-392561/.minikube/bin
	I1206 10:24:36.076270  425866 out.go:368] Setting JSON to false
	I1206 10:24:36.076293  425866 mustload.go:66] Loading cluster: multinode-777422
	I1206 10:24:36.076438  425866 notify.go:221] Checking for updates...
	I1206 10:24:36.076651  425866 config.go:182] Loaded profile config "multinode-777422": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:24:36.076666  425866 status.go:174] checking status of multinode-777422 ...
	I1206 10:24:36.078887  425866 status.go:371] multinode-777422 host status = "Stopped" (err=<nil>)
	I1206 10:24:36.078909  425866 status.go:384] host is not running, skipping remaining checks
	I1206 10:24:36.078916  425866 status.go:176] multinode-777422 status: &{Name:multinode-777422 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 10:24:36.078940  425866 status.go:174] checking status of multinode-777422-m02 ...
	I1206 10:24:36.080297  425866 status.go:371] multinode-777422-m02 host status = "Stopped" (err=<nil>)
	I1206 10:24:36.080313  425866 status.go:384] host is not running, skipping remaining checks
	I1206 10:24:36.080317  425866 status.go:176] multinode-777422-m02 status: &{Name:multinode-777422-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (158.79s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (84.58s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-777422 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1206 10:25:52.061967  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-777422 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m24.109086364s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-777422 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (84.58s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (38.22s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-777422
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-777422-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-777422-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (84.538449ms)

                                                
                                                
-- stdout --
	* [multinode-777422-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-392561/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-392561/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-777422-m02' is duplicated with machine name 'multinode-777422-m02' in profile 'multinode-777422'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-777422-m03 --driver=kvm2  --container-runtime=crio
E1206 10:26:05.434230  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-777422-m03 --driver=kvm2  --container-runtime=crio: (37.007590244s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-777422
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-777422: exit status 80 (218.144988ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-777422 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-777422-m03 already exists in multinode-777422-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-777422-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (38.22s)

                                                
                                    
TestScheduledStopUnix (107.22s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-289786 --memory=3072 --driver=kvm2  --container-runtime=crio
E1206 10:29:28.985246  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-289786 --memory=3072 --driver=kvm2  --container-runtime=crio: (35.483250279s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-289786 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1206 10:29:42.702623  428124 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:29:42.702895  428124 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:29:42.702903  428124 out.go:374] Setting ErrFile to fd 2...
	I1206 10:29:42.702907  428124 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:29:42.703146  428124 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-392561/.minikube/bin
	I1206 10:29:42.703379  428124 out.go:368] Setting JSON to false
	I1206 10:29:42.703466  428124 mustload.go:66] Loading cluster: scheduled-stop-289786
	I1206 10:29:42.703782  428124 config.go:182] Loaded profile config "scheduled-stop-289786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:29:42.703849  428124 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/scheduled-stop-289786/config.json ...
	I1206 10:29:42.704030  428124 mustload.go:66] Loading cluster: scheduled-stop-289786
	I1206 10:29:42.704128  428124 config.go:182] Loaded profile config "scheduled-stop-289786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-289786 -n scheduled-stop-289786
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-289786 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1206 10:29:43.015608  428167 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:29:43.015730  428167 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:29:43.015738  428167 out.go:374] Setting ErrFile to fd 2...
	I1206 10:29:43.015744  428167 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:29:43.015985  428167 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-392561/.minikube/bin
	I1206 10:29:43.016243  428167 out.go:368] Setting JSON to false
	I1206 10:29:43.016455  428167 daemonize_unix.go:73] killing process 428157 as it is an old scheduled stop
	I1206 10:29:43.016571  428167 mustload.go:66] Loading cluster: scheduled-stop-289786
	I1206 10:29:43.016954  428167 config.go:182] Loaded profile config "scheduled-stop-289786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:29:43.017030  428167 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/scheduled-stop-289786/config.json ...
	I1206 10:29:43.017218  428167 mustload.go:66] Loading cluster: scheduled-stop-289786
	I1206 10:29:43.017333  428167 config.go:182] Loaded profile config "scheduled-stop-289786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1206 10:29:43.021980  396534 retry.go:31] will retry after 87.598µs: open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/scheduled-stop-289786/pid: no such file or directory
I1206 10:29:43.023202  396534 retry.go:31] will retry after 161.781µs: open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/scheduled-stop-289786/pid: no such file or directory
I1206 10:29:43.024403  396534 retry.go:31] will retry after 152.964µs: open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/scheduled-stop-289786/pid: no such file or directory
I1206 10:29:43.025638  396534 retry.go:31] will retry after 362.553µs: open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/scheduled-stop-289786/pid: no such file or directory
I1206 10:29:43.026815  396534 retry.go:31] will retry after 348.963µs: open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/scheduled-stop-289786/pid: no such file or directory
I1206 10:29:43.028005  396534 retry.go:31] will retry after 1.042505ms: open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/scheduled-stop-289786/pid: no such file or directory
I1206 10:29:43.029156  396534 retry.go:31] will retry after 659.364µs: open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/scheduled-stop-289786/pid: no such file or directory
I1206 10:29:43.030346  396534 retry.go:31] will retry after 1.270831ms: open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/scheduled-stop-289786/pid: no such file or directory
I1206 10:29:43.032624  396534 retry.go:31] will retry after 2.053209ms: open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/scheduled-stop-289786/pid: no such file or directory
I1206 10:29:43.034943  396534 retry.go:31] will retry after 4.920691ms: open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/scheduled-stop-289786/pid: no such file or directory
I1206 10:29:43.040221  396534 retry.go:31] will retry after 4.400278ms: open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/scheduled-stop-289786/pid: no such file or directory
I1206 10:29:43.045520  396534 retry.go:31] will retry after 6.415017ms: open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/scheduled-stop-289786/pid: no such file or directory
I1206 10:29:43.052791  396534 retry.go:31] will retry after 15.498738ms: open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/scheduled-stop-289786/pid: no such file or directory
I1206 10:29:43.069117  396534 retry.go:31] will retry after 24.549114ms: open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/scheduled-stop-289786/pid: no such file or directory
I1206 10:29:43.094555  396534 retry.go:31] will retry after 21.029253ms: open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/scheduled-stop-289786/pid: no such file or directory
I1206 10:29:43.115882  396534 retry.go:31] will retry after 50.483941ms: open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/scheduled-stop-289786/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-289786 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-289786 -n scheduled-stop-289786
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-289786
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-289786 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1206 10:30:08.781065  428318 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:30:08.781207  428318 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:30:08.781218  428318 out.go:374] Setting ErrFile to fd 2...
	I1206 10:30:08.781222  428318 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:30:08.781464  428318 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-392561/.minikube/bin
	I1206 10:30:08.781774  428318 out.go:368] Setting JSON to false
	I1206 10:30:08.781875  428318 mustload.go:66] Loading cluster: scheduled-stop-289786
	I1206 10:30:08.782268  428318 config.go:182] Loaded profile config "scheduled-stop-289786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:30:08.782349  428318 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/scheduled-stop-289786/config.json ...
	I1206 10:30:08.782572  428318 mustload.go:66] Loading cluster: scheduled-stop-289786
	I1206 10:30:08.782696  428318 config.go:182] Loaded profile config "scheduled-stop-289786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-289786
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-289786: exit status 7 (66.948344ms)

                                                
                                                
-- stdout --
	scheduled-stop-289786
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-289786 -n scheduled-stop-289786
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-289786 -n scheduled-stop-289786: exit status 7 (67.324412ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-289786" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-289786
--- PASS: TestScheduledStopUnix (107.22s)
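For reference, the retry.go lines above show the test polling for the profile's scheduled-stop pid file with a steadily growing delay until it appears. A loop in that spirit might look like the sketch below; the helper name, attempt count, backoff factor, cap, and example path are assumptions for illustration only, not minikube's actual retry code.

// Hedged sketch: poll for a file with a growing backoff, similar in spirit
// to the retry.go log lines above.
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForFile polls until path exists or attempts run out, roughly doubling
// the delay each time and capping it at maxDelay.
func waitForFile(path string, attempts int, maxDelay time.Duration) error {
	delay := 100 * time.Microsecond
	for i := 0; i < attempts; i++ {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %s not present yet\n", delay, path)
		time.Sleep(delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
	return fmt.Errorf("%s did not appear after %d attempts", path, attempts)
}

func main() {
	// Hypothetical path; the real test watches the profile's pid file.
	if err := waitForFile("/tmp/example.pid", 16, 100*time.Millisecond); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}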

                                                
                                    
TestRunningBinaryUpgrade (406.45s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.2504876889 start -p running-upgrade-976040 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.2504876889 start -p running-upgrade-976040 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m29.307636417s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-976040 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-976040 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (5m12.68123229s)
helpers_test.go:175: Cleaning up "running-upgrade-976040" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-976040
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-976040: (1.155210413s)
--- PASS: TestRunningBinaryUpgrade (406.45s)
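The upgrade path validated above, condensed from the log: create a cluster with the previously released binary, re-run start on the same profile with the binary under test while the cluster is still running, then delete the profile.

    # bring up the cluster with the released v1.35.0 binary
    /tmp/minikube-v1.35.0.2504876889 start -p running-upgrade-976040 --memory=3072 --vm-driver=kvm2 --container-runtime=crio
    # upgrade in place with the binary under test
    out/minikube-linux-amd64 start -p running-upgrade-976040 --memory=3072 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio
    # clean up
    out/minikube-linux-amd64 delete -p running-upgrade-976040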

                                                
                                    
TestKubernetesUpgrade (148.93s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-280117 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-280117 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (39.877196427s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-280117
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-280117: (1.964154333s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-280117 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-280117 status --format={{.Host}}: exit status 7 (69.623779ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-280117 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-280117 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m3.865276418s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-280117 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-280117 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-280117 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (86.56409ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-280117] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-392561/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-392561/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-beta.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-280117
	    minikube start -p kubernetes-upgrade-280117 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2801172 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-280117 --kubernetes-version=v1.35.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-280117 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-280117 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (41.946965478s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-280117" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-280117
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-280117: (1.061147579s)
--- PASS: TestKubernetesUpgrade (148.93s)
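Condensed from the log above: the cluster is upgraded from v1.28.0 to v1.35.0-beta.0, a downgrade back to v1.28.0 is refused with K8S_DOWNGRADE_UNSUPPORTED (exit status 106), and the error text itself spells out the supported recovery, for example:

    # downgrading an existing cluster is not supported; recreate the profile instead,
    # exactly as the suggestion in the error message above says
    minikube delete -p kubernetes-upgrade-280117
    minikube start -p kubernetes-upgrade-280117 --kubernetes-version=v1.28.0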

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-012243 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-012243 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (102.681843ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-012243] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-392561/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-392561/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
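As the MK_USAGE error above states, --kubernetes-version cannot be combined with --no-kubernetes (exit status 14). The remedy suggested by the message, followed by the invocation the later NoKubernetes subtests use, looks like:

    # clear any globally configured version, per the error's own suggestion
    minikube config unset kubernetes-version
    # then start the profile without Kubernetes (the form used by StartWithStopK8s below)
    out/minikube-linux-amd64 start -p NoKubernetes-012243 --no-kubernetes --memory=3072 --driver=kvm2 --container-runtime=crio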

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (76.84s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-012243 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-012243 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m16.54718342s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-012243 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (76.84s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (3.25s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.25s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (106.71s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.608788603 start -p stopped-upgrade-368867 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.608788603 start -p stopped-upgrade-368867 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (56.325754961s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.608788603 -p stopped-upgrade-368867 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.608788603 -p stopped-upgrade-368867 stop: (1.728358804s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-368867 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-368867 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (48.655191212s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (106.71s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (42.98s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-012243 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-012243 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (41.924044117s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-012243 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-012243 status -o json: exit status 2 (217.606059ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-012243","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-012243
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (42.98s)

                                                
                                    
TestNoKubernetes/serial/Start (31.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-012243 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1206 10:33:02.365514  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-012243 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (31.201109647s)
--- PASS: TestNoKubernetes/serial/Start (31.20s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22047-392561/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-012243 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-012243 "sudo systemctl is-active --quiet service kubelet": exit status 1 (175.959086ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.18s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (29.46s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (16.059708303s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:204: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (13.395245534s)
--- PASS: TestNoKubernetes/serial/ProfileList (29.46s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.05s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-368867
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-368867: (1.047473478s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.05s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.44s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-012243
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-012243: (1.442771704s)
--- PASS: TestNoKubernetes/serial/Stop (1.44s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (21.56s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-012243 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-012243 --driver=kvm2  --container-runtime=crio: (21.561321668s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (21.56s)

                                                
                                    
TestNetworkPlugins/group/false (4.81s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-777177 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-777177 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (158.548437ms)

                                                
                                                
-- stdout --
	* [false-777177] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-392561/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-392561/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 10:33:59.412830  432039 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:33:59.412966  432039 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:33:59.412976  432039 out.go:374] Setting ErrFile to fd 2...
	I1206 10:33:59.412983  432039 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:33:59.413301  432039 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-392561/.minikube/bin
	I1206 10:33:59.413934  432039 out.go:368] Setting JSON to false
	I1206 10:33:59.415220  432039 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8179,"bootTime":1765009060,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 10:33:59.415300  432039 start.go:143] virtualization: kvm guest
	I1206 10:33:59.417770  432039 out.go:179] * [false-777177] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 10:33:59.419200  432039 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 10:33:59.419281  432039 notify.go:221] Checking for updates...
	I1206 10:33:59.422267  432039 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:33:59.423590  432039 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-392561/kubeconfig
	I1206 10:33:59.424930  432039 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-392561/.minikube
	I1206 10:33:59.429029  432039 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 10:33:59.430614  432039 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:33:59.432598  432039 config.go:182] Loaded profile config "NoKubernetes-012243": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1206 10:33:59.432772  432039 config.go:182] Loaded profile config "force-systemd-env-294790": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:33:59.432901  432039 config.go:182] Loaded profile config "running-upgrade-976040": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1206 10:33:59.433055  432039 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:33:59.482290  432039 out.go:179] * Using the kvm2 driver based on user configuration
	I1206 10:33:59.483947  432039 start.go:309] selected driver: kvm2
	I1206 10:33:59.483971  432039 start.go:927] validating driver "kvm2" against <nil>
	I1206 10:33:59.483989  432039 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:33:59.486149  432039 out.go:203] 
	W1206 10:33:59.487805  432039 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1206 10:33:59.489194  432039 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-777177 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-777177

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-777177

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-777177

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-777177

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-777177

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-777177

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-777177

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-777177

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-777177

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-777177

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-777177"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-777177"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-777177"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-777177

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-777177"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-777177"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-777177" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-777177" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-777177" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-777177" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-777177" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-777177" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-777177" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-777177" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-777177"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-777177"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-777177"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-777177"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-777177"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-777177" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-777177" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-777177" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-777177"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-777177"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-777177"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-777177"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-777177"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22047-392561/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 06 Dec 2025 10:33:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.72.144:8443
  name: running-upgrade-976040
contexts:
- context:
    cluster: running-upgrade-976040
    user: running-upgrade-976040
  name: running-upgrade-976040
current-context: ""
kind: Config
users:
- name: running-upgrade-976040
  user:
    client-certificate: /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/running-upgrade-976040/client.crt
    client-key: /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/running-upgrade-976040/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-777177

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-777177"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-777177"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-777177"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-777177"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-777177"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-777177"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-777177"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-777177"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-777177"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-777177"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-777177"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-777177"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-777177"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-777177"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-777177"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-777177"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-777177"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-777177"

                                                
                                                
----------------------- debugLogs end: false-777177 [took: 4.174276348s] --------------------------------
helpers_test.go:175: Cleaning up "false-777177" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-777177
--- PASS: TestNetworkPlugins/group/false (4.81s)
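The failure exercised above is pure flag validation: with --container-runtime=crio, --cni=false is rejected before any VM is created (exit status 14). A working invocation simply keeps a CNI enabled; the explicit values mentioned in the comment below are minikube's documented --cni options, not something taken from this log:

    # rejected: "The crio container runtime requires CNI"
    out/minikube-linux-amd64 start -p false-777177 --memory=3072 --cni=false --driver=kvm2 --container-runtime=crio
    # accepted: omit --cni=false so minikube picks a CNI (or name one explicitly, e.g. --cni=bridge)
    out/minikube-linux-amd64 start -p false-777177 --memory=3072 --driver=kvm2 --container-runtime=crio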

                                                
                                    
TestISOImage/Setup (26.34s)

                                                
                                                
=== RUN   TestISOImage/Setup
iso_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p guest-968200 --no-kubernetes --driver=kvm2  --container-runtime=crio
iso_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p guest-968200 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.342976282s)
--- PASS: TestISOImage/Setup (26.34s)
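The TestISOImage/Binaries subtests further down all reuse this guest and share a single probe: ssh in and confirm the expected tool is on PATH. Condensed from the log:

    # boot a guest from the ISO without Kubernetes
    out/minikube-linux-amd64 start -p guest-968200 --no-kubernetes --driver=kvm2 --container-runtime=crio
    # each Binaries/<tool> subtest then runs one check of this shape
    # (crictl, curl, docker, git, iptables, podman, rsync, socat, wget, VBoxControl, VBoxService)
    out/minikube-linux-amd64 -p guest-968200 ssh "which crictl"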

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-012243 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-012243 "sudo systemctl is-active --quiet service kubelet": exit status 1 (177.253121ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.18s)

                                                
                                    
TestPause/serial/Start (90.89s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-672164 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-672164 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m30.891084135s)
--- PASS: TestPause/serial/Start (90.89s)

                                                
                                    
TestISOImage/Binaries/crictl (0.21s)

                                                
                                                
=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/crictl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-968200 ssh "which crictl"
--- PASS: TestISOImage/Binaries/crictl (0.21s)

                                                
                                    
TestISOImage/Binaries/curl (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/curl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-968200 ssh "which curl"
--- PASS: TestISOImage/Binaries/curl (0.18s)

                                                
                                    
TestISOImage/Binaries/docker (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/docker
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-968200 ssh "which docker"
--- PASS: TestISOImage/Binaries/docker (0.19s)

                                                
                                    
TestISOImage/Binaries/git (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/git
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-968200 ssh "which git"
--- PASS: TestISOImage/Binaries/git (0.20s)

                                                
                                    
TestISOImage/Binaries/iptables (0.21s)

                                                
                                                
=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/iptables
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-968200 ssh "which iptables"
--- PASS: TestISOImage/Binaries/iptables (0.21s)

                                                
                                    
TestISOImage/Binaries/podman (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/podman
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-968200 ssh "which podman"
--- PASS: TestISOImage/Binaries/podman (0.20s)

                                                
                                    
TestISOImage/Binaries/rsync (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/rsync
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-968200 ssh "which rsync"
--- PASS: TestISOImage/Binaries/rsync (0.20s)

                                                
                                    
TestISOImage/Binaries/socat (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/socat
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-968200 ssh "which socat"
--- PASS: TestISOImage/Binaries/socat (0.20s)

                                                
                                    
TestISOImage/Binaries/wget (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/wget
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-968200 ssh "which wget"
--- PASS: TestISOImage/Binaries/wget (0.20s)

                                                
                                    
TestISOImage/Binaries/VBoxControl (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-968200 ssh "which VBoxControl"
--- PASS: TestISOImage/Binaries/VBoxControl (0.19s)

                                                
                                    
TestISOImage/Binaries/VBoxService (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-968200 ssh "which VBoxService"
--- PASS: TestISOImage/Binaries/VBoxService (0.20s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (58.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-147016 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-147016 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (58.201609706s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (58.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (97.89s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-336945 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-336945 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (1m37.88919312s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (97.89s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (11.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-147016 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f562d459-a16a-4aca-881b-cd11bdfc8559] Pending
helpers_test.go:352: "busybox" [f562d459-a16a-4aca-881b-cd11bdfc8559] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f562d459-a16a-4aca-881b-cd11bdfc8559] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.00618058s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-147016 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.42s)
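Condensed from the log: the busybox manifest from testdata is applied, the Go helper polls until the pod reports Running, and the open-file limit is read inside the container. The kubectl wait line is only a rough stand-in for the helper's polling loop, not a command from this run:

    kubectl --context old-k8s-version-147016 create -f testdata/busybox.yaml
    # approximate equivalent of the helper's readiness poll (assumption, not in the log)
    kubectl --context old-k8s-version-147016 wait --for=condition=ready pod -l integration-test=busybox --timeout=8m
    kubectl --context old-k8s-version-147016 exec busybox -- /bin/sh -c "ulimit -n"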

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (84.77s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-132696 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-132696 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (1m24.767565532s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (84.77s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-147016 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-147016 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.160056098s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-147016 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.25s)
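The addon toggle above can be reproduced from the two commands in the log: enable metrics-server on the live profile with its image and registry overridden to stand-ins, then confirm the deployment picked up the overrides:

    out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-147016 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
    kubectl --context old-k8s-version-147016 describe deploy/metrics-server -n kube-system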

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (87.58s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-147016 --alsologtostderr -v=3
E1206 10:38:02.365108  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-147016 --alsologtostderr -v=3: (1m27.582597573s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (87.58s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (11.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-336945 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [07e7a6b3-65b9-44be-a263-0d5b5d3288c3] Pending
helpers_test.go:352: "busybox" [07e7a6b3-65b9-44be-a263-0d5b5d3288c3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [07e7a6b3-65b9-44be-a263-0d5b5d3288c3] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.004915459s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-336945 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.31s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-336945 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-336945 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.024067053s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-336945 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (87.98s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-336945 --alsologtostderr -v=3
E1206 10:39:04.547249  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-310626/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-336945 --alsologtostderr -v=3: (1m27.982477546s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (87.98s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.30s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-132696 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [7ae70502-c3b5-4d27-a9fc-c09192a5879e] Pending
helpers_test.go:352: "busybox" [7ae70502-c3b5-4d27-a9fc-c09192a5879e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [7ae70502-c3b5-4d27-a9fc-c09192a5879e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004553419s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-132696 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.30s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-132696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-132696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.001230776s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-132696 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.09s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-147016 -n old-k8s-version-147016
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-147016 -n old-k8s-version-147016: exit status 7 (71.696095ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-147016 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.16s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (45.44s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-147016 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-147016 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (45.177746008s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-147016 -n old-k8s-version-147016
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (45.44s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (74.55s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-132696 --alsologtostderr -v=3
E1206 10:39:28.982908  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-132696 --alsologtostderr -v=3: (1m14.551875939s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (74.55s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (10.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-vwczj" [b1f21ab5-f30c-49e4-922f-8c4dfb86db76] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-vwczj" [b1f21ab5-f30c-49e4-922f-8c4dfb86db76] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.004419897s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (10.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-336945 -n no-preload-336945
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-336945 -n no-preload-336945: exit status 7 (67.881724ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-336945 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.16s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (52.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-336945 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-336945 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (52.004041167s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-336945 -n no-preload-336945
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (52.31s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-vwczj" [b1f21ab5-f30c-49e4-922f-8c4dfb86db76] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003690038s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-147016 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-147016 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.80s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-147016 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-147016 -n old-k8s-version-147016
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-147016 -n old-k8s-version-147016: exit status 2 (244.187117ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-147016 -n old-k8s-version-147016
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-147016 -n old-k8s-version-147016: exit status 2 (246.211963ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-147016 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-147016 -n old-k8s-version-147016
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-147016 -n old-k8s-version-147016
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.80s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (54.96s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-886394 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-886394 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (54.963744413s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (54.96s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-132696 -n embed-certs-132696
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-132696 -n embed-certs-132696: exit status 7 (89.806303ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-132696 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (52.76s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-132696 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-132696 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (52.467141759s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-132696 -n embed-certs-132696
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (52.76s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (16.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-gqjl9" [2bd4e948-9837-433b-a603-f820cd6e2d74] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-gqjl9" [2bd4e948-9837-433b-a603-f820cd6e2d74] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 16.006710656s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (16.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-gqjl9" [2bd4e948-9837-433b-a603-f820cd6e2d74] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.006633983s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-336945 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.33s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-886394 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [41636b47-2d20-4c4d-bca4-3bc1d871d86f] Pending
helpers_test.go:352: "busybox" [41636b47-2d20-4c4d-bca4-3bc1d871d86f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [41636b47-2d20-4c4d-bca4-3bc1d871d86f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.005117202s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-886394 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.33s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.20s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-336945 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.86s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-336945 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-336945 -n no-preload-336945
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-336945 -n no-preload-336945: exit status 2 (244.218462ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-336945 -n no-preload-336945
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-336945 -n no-preload-336945: exit status 2 (244.870828ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-336945 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-336945 -n no-preload-336945
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-336945 -n no-preload-336945
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.86s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-lgx79" [69f08bb5-e722-47c6-b94e-bfb43133a4d0] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-lgx79" [69f08bb5-e722-47c6-b94e-bfb43133a4d0] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.004380451s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.01s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (45.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-305648 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-305648 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (45.211691731s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (45.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.68s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-886394 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-886394 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.577785331s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-886394 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.68s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (88.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-886394 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-886394 --alsologtostderr -v=3: (1m28.117792661s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (88.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-lgx79" [69f08bb5-e722-47c6-b94e-bfb43133a4d0] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004378284s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-132696 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-132696 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.60s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-132696 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-132696 -n embed-certs-132696
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-132696 -n embed-certs-132696: exit status 2 (248.947334ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-132696 -n embed-certs-132696
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-132696 -n embed-certs-132696: exit status 2 (242.828791ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-132696 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-132696 -n embed-certs-132696
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-132696 -n embed-certs-132696
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.60s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (86.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-777177 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-777177 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m26.976965083s)
--- PASS: TestNetworkPlugins/group/auto/Start (86.98s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.06s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-305648 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-305648 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.055813168s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.06s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (7.82s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-305648 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-305648 --alsologtostderr -v=3: (7.817065189s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.82s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-305648 -n newest-cni-305648
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-305648 -n newest-cni-305648: exit status 7 (83.109167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-305648 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (35.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-305648 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1206 10:42:32.063847  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:42:36.322792  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/old-k8s-version-147016/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:42:36.329254  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/old-k8s-version-147016/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:42:36.340794  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/old-k8s-version-147016/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:42:36.362262  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/old-k8s-version-147016/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:42:36.403850  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/old-k8s-version-147016/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:42:36.485501  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/old-k8s-version-147016/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:42:36.647309  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/old-k8s-version-147016/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:42:36.969241  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/old-k8s-version-147016/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:42:37.611297  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/old-k8s-version-147016/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:42:38.893516  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/old-k8s-version-147016/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:42:41.455175  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/old-k8s-version-147016/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:42:45.435827  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:42:46.577274  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/old-k8s-version-147016/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-305648 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (34.837518402s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-305648 -n newest-cni-305648
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (35.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-305648 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.83s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-305648 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-305648 --alsologtostderr -v=1: (1.489415233s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-305648 -n newest-cni-305648
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-305648 -n newest-cni-305648: exit status 2 (310.833496ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-305648 -n newest-cni-305648
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-305648 -n newest-cni-305648: exit status 2 (287.542716ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-305648 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-305648 -n newest-cni-305648
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-305648 -n newest-cni-305648
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.83s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-886394 -n default-k8s-diff-port-886394
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-886394 -n default-k8s-diff-port-886394: exit status 7 (87.843618ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-886394 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (48.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-886394 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
E1206 10:42:56.819282  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/old-k8s-version-147016/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-886394 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (47.992947853s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-886394 -n default-k8s-diff-port-886394
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (48.29s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (116.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-777177 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
E1206 10:43:02.364707  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-959292/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-777177 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m56.584022221s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (116.58s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-777177 "pgrep -a kubelet"
I1206 10:43:10.819290  396534 config.go:182] Loaded profile config "auto-777177": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-777177 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2rsd7" [b2e32949-5094-4b22-b042-19a311f3170d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2rsd7" [b2e32949-5094-4b22-b042-19a311f3170d] Running
E1206 10:43:17.300642  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/old-k8s-version-147016/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004860618s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.25s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-777177 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-777177 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-777177 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (82.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-777177 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-777177 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m22.600260921s)
--- PASS: TestNetworkPlugins/group/calico/Start (82.60s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (13.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-xdnxb" [852668b2-f406-40d5-97e3-de41750b8250] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1206 10:43:47.720374  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/no-preload-336945/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-xdnxb" [852668b2-f406-40d5-97e3-de41750b8250] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.003736122s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (13.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-xdnxb" [852668b2-f406-40d5-97e3-de41750b8250] Running
E1206 10:43:58.262944  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/old-k8s-version-147016/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006303561s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-886394 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-886394 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-886394 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-886394 -n default-k8s-diff-port-886394
E1206 10:44:04.546676  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-310626/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-886394 -n default-k8s-diff-port-886394: exit status 2 (257.049315ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-886394 -n default-k8s-diff-port-886394
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-886394 -n default-k8s-diff-port-886394: exit status 2 (237.585342ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-886394 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-886394 -n default-k8s-diff-port-886394
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-886394 -n default-k8s-diff-port-886394
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.06s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (77.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-777177 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E1206 10:44:28.982840  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/addons-774690/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-777177 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m17.056019109s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (77.06s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (83.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-777177 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E1206 10:44:49.163531  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/no-preload-336945/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-777177 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m23.597007994s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (83.60s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-w92zp" [eaea01ed-ca74-4d82-b860-ddeafbdff8c5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006087395s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-k9w87" [e3f8be28-954a-4e12-82ed-68e777a9e427] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.008771163s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-777177 "pgrep -a kubelet"
I1206 10:45:03.917526  396534 config.go:182] Loaded profile config "kindnet-777177": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-777177 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-l8lcq" [aa89c9b4-7519-48b5-b2a1-504ee70bf988] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-l8lcq" [aa89c9b4-7519-48b5-b2a1-504ee70bf988] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004733299s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.31s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-777177 "pgrep -a kubelet"
I1206 10:45:07.143079  396534 config.go:182] Loaded profile config "calico-777177": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-777177 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-szxlh" [569f2890-f8b0-4e21-992f-e11749407b84] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-szxlh" [569f2890-f8b0-4e21-992f-e11749407b84] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.005347229s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.28s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-777177 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-777177 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-777177 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-777177 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-777177 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-777177 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-777177 "pgrep -a kubelet"
I1206 10:45:25.668966  396534 config.go:182] Loaded profile config "custom-flannel-777177": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-777177 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-n6cps" [58467230-8547-44e6-983a-6c2d3e3b1c10] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-n6cps" [58467230-8547-44e6-983a-6c2d3e3b1c10] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.005209879s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.30s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (77.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-777177 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-777177 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m17.114693215s)
--- PASS: TestNetworkPlugins/group/flannel/Start (77.11s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (96.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-777177 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-777177 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m36.72786725s)
--- PASS: TestNetworkPlugins/group/bridge/Start (96.73s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-777177 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-777177 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-777177 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                    
TestISOImage/PersistentMounts//data (0.21s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-968200 ssh "df -t ext4 /data | grep /data"
--- PASS: TestISOImage/PersistentMounts//data (0.21s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/docker (0.18s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-968200 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/docker (0.18s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/cni (0.18s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-968200 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
--- PASS: TestISOImage/PersistentMounts//var/lib/cni (0.18s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/kubelet (0.17s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-968200 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
--- PASS: TestISOImage/PersistentMounts//var/lib/kubelet (0.17s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/minikube (0.18s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-968200 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
--- PASS: TestISOImage/PersistentMounts//var/lib/minikube (0.18s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/toolbox (0.18s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-968200 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
--- PASS: TestISOImage/PersistentMounts//var/lib/toolbox (0.18s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/boot2docker (0.19s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-968200 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/boot2docker (0.19s)

                                                
                                    
TestISOImage/VersionJSON (0.18s)

                                                
                                                
=== RUN   TestISOImage/VersionJSON
iso_test.go:106: (dbg) Run:  out/minikube-linux-amd64 -p guest-968200 ssh "cat /version.json"
iso_test.go:116: Successfully parsed /version.json:
iso_test.go:118:   iso_version: v1.37.0-1764843329-22032
iso_test.go:118:   kicbase_version: v0.0.48-1764169655-21974
iso_test.go:118:   minikube_version: v1.37.0
iso_test.go:118:   commit: d7bfd7d6d80c3eeb1d6cf1c5f081f8642bc1997e
--- PASS: TestISOImage/VersionJSON (0.18s)

                                                
                                    
TestISOImage/eBPFSupport (0.17s)

                                                
                                                
=== RUN   TestISOImage/eBPFSupport
iso_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p guest-968200 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
--- PASS: TestISOImage/eBPFSupport (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-777177 "pgrep -a kubelet"
E1206 10:46:11.085307  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/no-preload-336945/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1206 10:46:11.091946  396534 config.go:182] Loaded profile config "enable-default-cni-777177": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-777177 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-wcdp9" [ac502c7e-fbe8-4d11-a312-6bbd421f32aa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1206 10:46:16.641929  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/default-k8s-diff-port-886394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:46:16.648410  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/default-k8s-diff-port-886394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:46:16.659988  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/default-k8s-diff-port-886394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:46:16.682076  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/default-k8s-diff-port-886394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:46:16.724193  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/default-k8s-diff-port-886394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:46:16.805870  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/default-k8s-diff-port-886394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:46:16.967871  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/default-k8s-diff-port-886394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:46:17.289805  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/default-k8s-diff-port-886394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-wcdp9" [ac502c7e-fbe8-4d11-a312-6bbd421f32aa] Running
E1206 10:46:17.931139  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/default-k8s-diff-port-886394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:46:19.213335  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/default-k8s-diff-port-886394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:46:21.775765  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/default-k8s-diff-port-886394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.004585797s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.26s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-777177 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-777177 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-777177 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-2xwjf" [26bcf811-f03c-4731-ac60-1ebbf325db5c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004630951s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-777177 "pgrep -a kubelet"
I1206 10:46:54.071121  396534 config.go:182] Loaded profile config "flannel-777177": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-777177 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-nzmwr" [bb4419a6-cb51-41c2-bd05-362a0ea09b61] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1206 10:46:57.621818  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/default-k8s-diff-port-886394/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-nzmwr" [bb4419a6-cb51-41c2-bd05-362a0ea09b61] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004061095s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.23s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-777177 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-777177 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-777177 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-777177 "pgrep -a kubelet"
I1206 10:47:11.907083  396534 config.go:182] Loaded profile config "bridge-777177": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-777177 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-8d7bs" [595347aa-d9e6-4bde-9d15-13efc9f4d39b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-8d7bs" [595347aa-d9e6-4bde-9d15-13efc9f4d39b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.005173291s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-777177 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-777177 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-777177 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                    

Test skip (52/431)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0
38 TestAddons/serial/Volcano 0.31
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
130 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
131 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
132 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
133 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
134 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
135 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
136 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
137 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
210 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0.01
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService 0.01
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 0.01
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.01
258 TestGvisorAddon 0
280 TestImageBuild 0
308 TestKicCustomNetwork 0
309 TestKicExistingNetwork 0
310 TestKicCustomSubnet 0
311 TestKicStaticIP 0
343 TestChangeNoneUser 0
346 TestScheduledStopWindows 0
348 TestSkaffold 0
350 TestInsufficientStorage 0
354 TestMissingContainerUpgrade 0
362 TestStartStop/group/disable-driver-mounts 0.19
376 TestNetworkPlugins/group/kubenet 3.96
385 TestNetworkPlugins/group/cilium 4.75
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.31s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-774690 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.31s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService (0.01s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only runs with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestStartStop/group/disable-driver-mounts (0.19s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-985928" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-985928
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)

TestNetworkPlugins/group/kubenet (3.96s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-777177 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-777177

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-777177

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-777177

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-777177

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-777177

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-777177

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-777177

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-777177

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-777177

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-777177

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-777177"

>>> host: /etc/hosts:
* Profile "kubenet-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-777177"

>>> host: /etc/resolv.conf:
* Profile "kubenet-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-777177"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-777177

>>> host: crictl pods:
* Profile "kubenet-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-777177"

>>> host: crictl containers:
* Profile "kubenet-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-777177"

>>> k8s: describe netcat deployment:
error: context "kubenet-777177" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-777177" does not exist

>>> k8s: netcat logs:
error: context "kubenet-777177" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-777177" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-777177" does not exist

>>> k8s: coredns logs:
error: context "kubenet-777177" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-777177" does not exist

>>> k8s: api server logs:
error: context "kubenet-777177" does not exist

>>> host: /etc/cni:
* Profile "kubenet-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-777177"

>>> host: ip a s:
* Profile "kubenet-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-777177"

>>> host: ip r s:
* Profile "kubenet-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-777177"

>>> host: iptables-save:
* Profile "kubenet-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-777177"

>>> host: iptables table nat:
* Profile "kubenet-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-777177"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-777177" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-777177" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-777177" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-777177"

>>> host: kubelet daemon config:
* Profile "kubenet-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-777177"

>>> k8s: kubelet logs:
* Profile "kubenet-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-777177"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-777177"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-777177"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22047-392561/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 06 Dec 2025 10:33:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.72.144:8443
  name: running-upgrade-976040
contexts:
- context:
    cluster: running-upgrade-976040
    user: running-upgrade-976040
  name: running-upgrade-976040
current-context: ""
kind: Config
users:
- name: running-upgrade-976040
  user:
    client-certificate: /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/running-upgrade-976040/client.crt
    client-key: /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/running-upgrade-976040/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-777177

>>> host: docker daemon status:
* Profile "kubenet-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-777177"

>>> host: docker daemon config:
* Profile "kubenet-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-777177"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-777177"

>>> host: docker system info:
* Profile "kubenet-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-777177"

>>> host: cri-docker daemon status:
* Profile "kubenet-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-777177"

>>> host: cri-docker daemon config:
* Profile "kubenet-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-777177"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-777177"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-777177"

>>> host: cri-dockerd version:
* Profile "kubenet-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-777177"

>>> host: containerd daemon status:
* Profile "kubenet-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-777177"

>>> host: containerd daemon config:
* Profile "kubenet-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-777177"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-777177"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-777177"

>>> host: containerd config dump:
* Profile "kubenet-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-777177"

>>> host: crio daemon status:
* Profile "kubenet-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-777177"

>>> host: crio daemon config:
* Profile "kubenet-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-777177"

>>> host: /etc/crio:
* Profile "kubenet-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-777177"

>>> host: crio config:
* Profile "kubenet-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-777177"

----------------------- debugLogs end: kubenet-777177 [took: 3.733053762s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-777177" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-777177
--- SKIP: TestNetworkPlugins/group/kubenet (3.96s)

TestNetworkPlugins/group/cilium (4.75s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
E1206 10:34:04.546519  396534 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/functional-310626/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
panic.go:615: 
----------------------- debugLogs start: cilium-777177 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-777177

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-777177

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-777177

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-777177

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-777177

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-777177

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-777177

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-777177

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-777177

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-777177

>>> host: /etc/nsswitch.conf:
* Profile "cilium-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-777177"

>>> host: /etc/hosts:
* Profile "cilium-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-777177"

>>> host: /etc/resolv.conf:
* Profile "cilium-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-777177"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-777177

>>> host: crictl pods:
* Profile "cilium-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-777177"

>>> host: crictl containers:
* Profile "cilium-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-777177"

>>> k8s: describe netcat deployment:
error: context "cilium-777177" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-777177" does not exist

>>> k8s: netcat logs:
error: context "cilium-777177" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-777177" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-777177" does not exist

>>> k8s: coredns logs:
error: context "cilium-777177" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-777177" does not exist

>>> k8s: api server logs:
error: context "cilium-777177" does not exist

>>> host: /etc/cni:
* Profile "cilium-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-777177"

>>> host: ip a s:
* Profile "cilium-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-777177"

>>> host: ip r s:
* Profile "cilium-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-777177"

>>> host: iptables-save:
* Profile "cilium-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-777177"

>>> host: iptables table nat:
* Profile "cilium-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-777177"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-777177

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-777177

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-777177" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-777177" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-777177

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-777177

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-777177" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-777177" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-777177" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-777177" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-777177" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-777177"

>>> host: kubelet daemon config:
* Profile "cilium-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-777177"

>>> k8s: kubelet logs:
* Profile "cilium-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-777177"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-777177"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-777177"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22047-392561/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 06 Dec 2025 10:33:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.72.144:8443
  name: running-upgrade-976040
contexts:
- context:
    cluster: running-upgrade-976040
    user: running-upgrade-976040
  name: running-upgrade-976040
current-context: ""
kind: Config
users:
- name: running-upgrade-976040
  user:
    client-certificate: /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/running-upgrade-976040/client.crt
    client-key: /home/jenkins/minikube-integration/22047-392561/.minikube/profiles/running-upgrade-976040/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-777177

>>> host: docker daemon status:
* Profile "cilium-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-777177"

>>> host: docker daemon config:
* Profile "cilium-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-777177"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-777177"

>>> host: docker system info:
* Profile "cilium-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-777177"

>>> host: cri-docker daemon status:
* Profile "cilium-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-777177"

>>> host: cri-docker daemon config:
* Profile "cilium-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-777177"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-777177"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-777177"

>>> host: cri-dockerd version:
* Profile "cilium-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-777177"

>>> host: containerd daemon status:
* Profile "cilium-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-777177"

>>> host: containerd daemon config:
* Profile "cilium-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-777177"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-777177"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-777177"

>>> host: containerd config dump:
* Profile "cilium-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-777177"

>>> host: crio daemon status:
* Profile "cilium-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-777177"

>>> host: crio daemon config:
* Profile "cilium-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-777177"

>>> host: /etc/crio:
* Profile "cilium-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-777177"

>>> host: crio config:
* Profile "cilium-777177" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-777177"

----------------------- debugLogs end: cilium-777177 [took: 4.552329985s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-777177" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-777177
--- SKIP: TestNetworkPlugins/group/cilium (4.75s)
