Test Report: KVM_Linux_crio 22168

9b787847521167b42f6debd67da4dc2d018928d7:2025-12-17:42812

Test fail (14/431)

TestAddons/parallel/Ingress (159.8s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-262069 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-262069 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-262069 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [2d05a1d3-b173-402d-b417-d11ed3f1e38b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [2d05a1d3-b173-402d-b417-d11ed3f1e38b] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.007972185s
I1217 00:09:23.414929   17074 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p addons-262069 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:266: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-262069 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m14.225273339s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:282: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:290: (dbg) Run:  kubectl --context addons-262069 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p addons-262069 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.39.183
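Note: the in-VM curl exited with status 28, which is curl's operation-timed-out code, so the request to http://127.0.0.1/ never completed within the 2m deadline. A minimal manual-triage sketch (not part of the test run; it assumes the addons-262069 profile is still running and the binary is at out/minikube-linux-amd64 as in the log above):

	# Check that the ingress-nginx controller pod is Ready before retrying the request.
	kubectl --context addons-262069 -n ingress-nginx get pods -l app.kubernetes.io/component=controller
	# Repeat the failing request with an explicit timeout; curl exit code 28 again would confirm a timeout rather than a routing error.
	out/minikube-linux-amd64 -p addons-262069 ssh "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"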
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-262069 -n addons-262069
helpers_test.go:253: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-262069 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-262069 logs -n 25: (1.500623001s)
helpers_test.go:261: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-330283                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-330283 │ jenkins │ v1.37.0 │ 17 Dec 25 00:06 UTC │ 17 Dec 25 00:06 UTC │
	│ start   │ --download-only -p binary-mirror-467623 --alsologtostderr --binary-mirror http://127.0.0.1:43951 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-467623 │ jenkins │ v1.37.0 │ 17 Dec 25 00:06 UTC │                     │
	│ delete  │ -p binary-mirror-467623                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-467623 │ jenkins │ v1.37.0 │ 17 Dec 25 00:06 UTC │ 17 Dec 25 00:06 UTC │
	│ addons  │ disable dashboard -p addons-262069                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-262069        │ jenkins │ v1.37.0 │ 17 Dec 25 00:06 UTC │                     │
	│ addons  │ enable dashboard -p addons-262069                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-262069        │ jenkins │ v1.37.0 │ 17 Dec 25 00:06 UTC │                     │
	│ start   │ -p addons-262069 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-262069        │ jenkins │ v1.37.0 │ 17 Dec 25 00:06 UTC │ 17 Dec 25 00:08 UTC │
	│ addons  │ addons-262069 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-262069        │ jenkins │ v1.37.0 │ 17 Dec 25 00:08 UTC │ 17 Dec 25 00:08 UTC │
	│ addons  │ addons-262069 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-262069        │ jenkins │ v1.37.0 │ 17 Dec 25 00:08 UTC │ 17 Dec 25 00:09 UTC │
	│ addons  │ enable headlamp -p addons-262069 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-262069        │ jenkins │ v1.37.0 │ 17 Dec 25 00:09 UTC │ 17 Dec 25 00:09 UTC │
	│ addons  │ addons-262069 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-262069        │ jenkins │ v1.37.0 │ 17 Dec 25 00:09 UTC │ 17 Dec 25 00:09 UTC │
	│ addons  │ addons-262069 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-262069        │ jenkins │ v1.37.0 │ 17 Dec 25 00:09 UTC │ 17 Dec 25 00:09 UTC │
	│ ip      │ addons-262069 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-262069        │ jenkins │ v1.37.0 │ 17 Dec 25 00:09 UTC │ 17 Dec 25 00:09 UTC │
	│ addons  │ addons-262069 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-262069        │ jenkins │ v1.37.0 │ 17 Dec 25 00:09 UTC │ 17 Dec 25 00:09 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-262069                                                                                                                                                                                                                                                                                                                                                                                         │ addons-262069        │ jenkins │ v1.37.0 │ 17 Dec 25 00:09 UTC │ 17 Dec 25 00:09 UTC │
	│ addons  │ addons-262069 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-262069        │ jenkins │ v1.37.0 │ 17 Dec 25 00:09 UTC │ 17 Dec 25 00:09 UTC │
	│ ssh     │ addons-262069 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-262069        │ jenkins │ v1.37.0 │ 17 Dec 25 00:09 UTC │                     │
	│ addons  │ addons-262069 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-262069        │ jenkins │ v1.37.0 │ 17 Dec 25 00:09 UTC │ 17 Dec 25 00:09 UTC │
	│ ssh     │ addons-262069 ssh cat /opt/local-path-provisioner/pvc-3eafbabf-bda1-4678-87d0-9af3d5bc37b7_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                      │ addons-262069        │ jenkins │ v1.37.0 │ 17 Dec 25 00:09 UTC │ 17 Dec 25 00:09 UTC │
	│ addons  │ addons-262069 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-262069        │ jenkins │ v1.37.0 │ 17 Dec 25 00:09 UTC │ 17 Dec 25 00:09 UTC │
	│ addons  │ addons-262069 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-262069        │ jenkins │ v1.37.0 │ 17 Dec 25 00:09 UTC │ 17 Dec 25 00:09 UTC │
	│ addons  │ addons-262069 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-262069        │ jenkins │ v1.37.0 │ 17 Dec 25 00:09 UTC │ 17 Dec 25 00:09 UTC │
	│ addons  │ addons-262069 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-262069        │ jenkins │ v1.37.0 │ 17 Dec 25 00:09 UTC │ 17 Dec 25 00:09 UTC │
	│ addons  │ addons-262069 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-262069        │ jenkins │ v1.37.0 │ 17 Dec 25 00:09 UTC │ 17 Dec 25 00:09 UTC │
	│ addons  │ addons-262069 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-262069        │ jenkins │ v1.37.0 │ 17 Dec 25 00:09 UTC │ 17 Dec 25 00:09 UTC │
	│ ip      │ addons-262069 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-262069        │ jenkins │ v1.37.0 │ 17 Dec 25 00:11 UTC │ 17 Dec 25 00:11 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:06:29
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:06:29.344840   17911 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:06:29.345113   17911 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:06:29.345122   17911 out.go:374] Setting ErrFile to fd 2...
	I1217 00:06:29.345127   17911 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:06:29.345317   17911 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12839/.minikube/bin
	I1217 00:06:29.345802   17911 out.go:368] Setting JSON to false
	I1217 00:06:29.346677   17911 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2935,"bootTime":1765927054,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 00:06:29.346729   17911 start.go:143] virtualization: kvm guest
	I1217 00:06:29.348924   17911 out.go:179] * [addons-262069] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 00:06:29.350295   17911 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:06:29.350312   17911 notify.go:221] Checking for updates...
	I1217 00:06:29.353771   17911 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:06:29.355236   17911 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-12839/kubeconfig
	I1217 00:06:29.356587   17911 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12839/.minikube
	I1217 00:06:29.357980   17911 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 00:06:29.359290   17911 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:06:29.360868   17911 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:06:29.391560   17911 out.go:179] * Using the kvm2 driver based on user configuration
	I1217 00:06:29.392842   17911 start.go:309] selected driver: kvm2
	I1217 00:06:29.392855   17911 start.go:927] validating driver "kvm2" against <nil>
	I1217 00:06:29.392864   17911 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:06:29.393596   17911 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 00:06:29.393822   17911 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:06:29.393862   17911 cni.go:84] Creating CNI manager for ""
	I1217 00:06:29.393905   17911 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 00:06:29.393913   17911 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1217 00:06:29.393959   17911 start.go:353] cluster config:
	{Name:addons-262069 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-262069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: A
utoPauseInterval:1m0s}
	I1217 00:06:29.394078   17911 iso.go:125] acquiring lock: {Name:mk94a221d1243bc618ab687e91468d7a3f9fe960 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:06:29.395574   17911 out.go:179] * Starting "addons-262069" primary control-plane node in "addons-262069" cluster
	I1217 00:06:29.396649   17911 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 00:06:29.396683   17911 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-12839/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1217 00:06:29.396691   17911 cache.go:65] Caching tarball of preloaded images
	I1217 00:06:29.396778   17911 preload.go:238] Found /home/jenkins/minikube-integration/22168-12839/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 00:06:29.396793   17911 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1217 00:06:29.397086   17911 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/config.json ...
	I1217 00:06:29.397108   17911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/config.json: {Name:mke599731771ab4633d490c64f121491f04633f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:06:29.397272   17911 start.go:360] acquireMachinesLock for addons-262069: {Name:mke100036b6b648b2e8844ce094d9d979b4c8eb4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1217 00:06:29.397337   17911 start.go:364] duration metric: took 47.711µs to acquireMachinesLock for "addons-262069"
	I1217 00:06:29.397360   17911 start.go:93] Provisioning new machine with config: &{Name:addons-262069 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:addons-262069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 00:06:29.397425   17911 start.go:125] createHost starting for "" (driver="kvm2")
	I1217 00:06:29.399106   17911 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1217 00:06:29.399260   17911 start.go:159] libmachine.API.Create for "addons-262069" (driver="kvm2")
	I1217 00:06:29.399287   17911 client.go:173] LocalClient.Create starting
	I1217 00:06:29.399403   17911 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca.pem
	I1217 00:06:29.423361   17911 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/cert.pem
	I1217 00:06:29.547071   17911 main.go:143] libmachine: creating domain...
	I1217 00:06:29.547091   17911 main.go:143] libmachine: creating network...
	I1217 00:06:29.548549   17911 main.go:143] libmachine: found existing default network
	I1217 00:06:29.548795   17911 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1217 00:06:29.549398   17911 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b737b0}
	I1217 00:06:29.549515   17911 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-262069</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1217 00:06:29.555796   17911 main.go:143] libmachine: creating private network mk-addons-262069 192.168.39.0/24...
	I1217 00:06:29.625854   17911 main.go:143] libmachine: private network mk-addons-262069 192.168.39.0/24 created
	I1217 00:06:29.626160   17911 main.go:143] libmachine: <network>
	  <name>mk-addons-262069</name>
	  <uuid>e703ee39-5ac4-4765-b8b5-6f6ef651ada0</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:2c:cd:ea'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1217 00:06:29.626197   17911 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069 ...
	I1217 00:06:29.626231   17911 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22168-12839/.minikube/cache/iso/amd64/minikube-v1.37.0-1765846775-22141-amd64.iso
	I1217 00:06:29.626257   17911 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22168-12839/.minikube
	I1217 00:06:29.626324   17911 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22168-12839/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22168-12839/.minikube/cache/iso/amd64/minikube-v1.37.0-1765846775-22141-amd64.iso...
	I1217 00:06:29.887825   17911 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069/id_rsa...
	I1217 00:06:30.001145   17911 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069/addons-262069.rawdisk...
	I1217 00:06:30.001193   17911 main.go:143] libmachine: Writing magic tar header
	I1217 00:06:30.001217   17911 main.go:143] libmachine: Writing SSH key tar header
	I1217 00:06:30.001335   17911 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069 ...
	I1217 00:06:30.001427   17911 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069
	I1217 00:06:30.001455   17911 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069 (perms=drwx------)
	I1217 00:06:30.001475   17911 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22168-12839/.minikube/machines
	I1217 00:06:30.001501   17911 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22168-12839/.minikube/machines (perms=drwxr-xr-x)
	I1217 00:06:30.001527   17911 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22168-12839/.minikube
	I1217 00:06:30.001541   17911 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22168-12839/.minikube (perms=drwxr-xr-x)
	I1217 00:06:30.001558   17911 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22168-12839
	I1217 00:06:30.001576   17911 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22168-12839 (perms=drwxrwxr-x)
	I1217 00:06:30.001594   17911 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1217 00:06:30.001609   17911 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1217 00:06:30.001625   17911 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1217 00:06:30.001644   17911 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1217 00:06:30.001661   17911 main.go:143] libmachine: checking permissions on dir: /home
	I1217 00:06:30.001674   17911 main.go:143] libmachine: skipping /home - not owner
	I1217 00:06:30.001680   17911 main.go:143] libmachine: defining domain...
	I1217 00:06:30.002877   17911 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-262069</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069/addons-262069.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-262069'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1217 00:06:30.011557   17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:d9:3f:b2 in network default
	I1217 00:06:30.012356   17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:06:30.012375   17911 main.go:143] libmachine: starting domain...
	I1217 00:06:30.012380   17911 main.go:143] libmachine: ensuring networks are active...
	I1217 00:06:30.013245   17911 main.go:143] libmachine: Ensuring network default is active
	I1217 00:06:30.013715   17911 main.go:143] libmachine: Ensuring network mk-addons-262069 is active
	I1217 00:06:30.014461   17911 main.go:143] libmachine: getting domain XML...
	I1217 00:06:30.015650   17911 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-262069</name>
	  <uuid>c11e3475-a333-4013-be6a-553f88d11a60</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069/addons-262069.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:78:11:d8'/>
	      <source network='mk-addons-262069'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:d9:3f:b2'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1217 00:06:31.345463   17911 main.go:143] libmachine: waiting for domain to start...
	I1217 00:06:31.346813   17911 main.go:143] libmachine: domain is now running
	I1217 00:06:31.346829   17911 main.go:143] libmachine: waiting for IP...
	I1217 00:06:31.347578   17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:06:31.348170   17911 main.go:143] libmachine: no network interface addresses found for domain addons-262069 (source=lease)
	I1217 00:06:31.348186   17911 main.go:143] libmachine: trying to list again with source=arp
	I1217 00:06:31.348482   17911 main.go:143] libmachine: unable to find current IP address of domain addons-262069 in network mk-addons-262069 (interfaces detected: [])
	I1217 00:06:31.348519   17911 retry.go:31] will retry after 237.694409ms: waiting for domain to come up
	I1217 00:06:31.588159   17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:06:31.588772   17911 main.go:143] libmachine: no network interface addresses found for domain addons-262069 (source=lease)
	I1217 00:06:31.588793   17911 main.go:143] libmachine: trying to list again with source=arp
	I1217 00:06:31.589225   17911 main.go:143] libmachine: unable to find current IP address of domain addons-262069 in network mk-addons-262069 (interfaces detected: [])
	I1217 00:06:31.589269   17911 retry.go:31] will retry after 332.822233ms: waiting for domain to come up
	I1217 00:06:31.924041   17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:06:31.924709   17911 main.go:143] libmachine: no network interface addresses found for domain addons-262069 (source=lease)
	I1217 00:06:31.924728   17911 main.go:143] libmachine: trying to list again with source=arp
	I1217 00:06:31.925115   17911 main.go:143] libmachine: unable to find current IP address of domain addons-262069 in network mk-addons-262069 (interfaces detected: [])
	I1217 00:06:31.925203   17911 retry.go:31] will retry after 351.790303ms: waiting for domain to come up
	I1217 00:06:32.279053   17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:06:32.279624   17911 main.go:143] libmachine: no network interface addresses found for domain addons-262069 (source=lease)
	I1217 00:06:32.279651   17911 main.go:143] libmachine: trying to list again with source=arp
	I1217 00:06:32.280061   17911 main.go:143] libmachine: unable to find current IP address of domain addons-262069 in network mk-addons-262069 (interfaces detected: [])
	I1217 00:06:32.280099   17911 retry.go:31] will retry after 427.603217ms: waiting for domain to come up
	I1217 00:06:32.709895   17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:06:32.710435   17911 main.go:143] libmachine: no network interface addresses found for domain addons-262069 (source=lease)
	I1217 00:06:32.710451   17911 main.go:143] libmachine: trying to list again with source=arp
	I1217 00:06:32.710775   17911 main.go:143] libmachine: unable to find current IP address of domain addons-262069 in network mk-addons-262069 (interfaces detected: [])
	I1217 00:06:32.710809   17911 retry.go:31] will retry after 686.480041ms: waiting for domain to come up
	I1217 00:06:33.398668   17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:06:33.399225   17911 main.go:143] libmachine: no network interface addresses found for domain addons-262069 (source=lease)
	I1217 00:06:33.399244   17911 main.go:143] libmachine: trying to list again with source=arp
	I1217 00:06:33.399552   17911 main.go:143] libmachine: unable to find current IP address of domain addons-262069 in network mk-addons-262069 (interfaces detected: [])
	I1217 00:06:33.399588   17911 retry.go:31] will retry after 794.514614ms: waiting for domain to come up
	I1217 00:06:34.195475   17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:06:34.196071   17911 main.go:143] libmachine: no network interface addresses found for domain addons-262069 (source=lease)
	I1217 00:06:34.196087   17911 main.go:143] libmachine: trying to list again with source=arp
	I1217 00:06:34.196358   17911 main.go:143] libmachine: unable to find current IP address of domain addons-262069 in network mk-addons-262069 (interfaces detected: [])
	I1217 00:06:34.196391   17911 retry.go:31] will retry after 1.179105994s: waiting for domain to come up
	I1217 00:06:35.377134   17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:06:35.377747   17911 main.go:143] libmachine: no network interface addresses found for domain addons-262069 (source=lease)
	I1217 00:06:35.377766   17911 main.go:143] libmachine: trying to list again with source=arp
	I1217 00:06:35.378115   17911 main.go:143] libmachine: unable to find current IP address of domain addons-262069 in network mk-addons-262069 (interfaces detected: [])
	I1217 00:06:35.378165   17911 retry.go:31] will retry after 1.065984921s: waiting for domain to come up
	I1217 00:06:36.445627   17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:06:36.446286   17911 main.go:143] libmachine: no network interface addresses found for domain addons-262069 (source=lease)
	I1217 00:06:36.446306   17911 main.go:143] libmachine: trying to list again with source=arp
	I1217 00:06:36.446612   17911 main.go:143] libmachine: unable to find current IP address of domain addons-262069 in network mk-addons-262069 (interfaces detected: [])
	I1217 00:06:36.446650   17911 retry.go:31] will retry after 1.365834942s: waiting for domain to come up
	I1217 00:06:37.814074   17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:06:37.814577   17911 main.go:143] libmachine: no network interface addresses found for domain addons-262069 (source=lease)
	I1217 00:06:37.814591   17911 main.go:143] libmachine: trying to list again with source=arp
	I1217 00:06:37.814876   17911 main.go:143] libmachine: unable to find current IP address of domain addons-262069 in network mk-addons-262069 (interfaces detected: [])
	I1217 00:06:37.814907   17911 retry.go:31] will retry after 1.648841511s: waiting for domain to come up
	I1217 00:06:39.465655   17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:06:39.466372   17911 main.go:143] libmachine: no network interface addresses found for domain addons-262069 (source=lease)
	I1217 00:06:39.466394   17911 main.go:143] libmachine: trying to list again with source=arp
	I1217 00:06:39.466758   17911 main.go:143] libmachine: unable to find current IP address of domain addons-262069 in network mk-addons-262069 (interfaces detected: [])
	I1217 00:06:39.466801   17911 retry.go:31] will retry after 2.17642133s: waiting for domain to come up
	I1217 00:06:41.646499   17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:06:41.647063   17911 main.go:143] libmachine: no network interface addresses found for domain addons-262069 (source=lease)
	I1217 00:06:41.647078   17911 main.go:143] libmachine: trying to list again with source=arp
	I1217 00:06:41.647353   17911 main.go:143] libmachine: unable to find current IP address of domain addons-262069 in network mk-addons-262069 (interfaces detected: [])
	I1217 00:06:41.647399   17911 retry.go:31] will retry after 3.466079888s: waiting for domain to come up
	I1217 00:06:45.114939   17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:06:45.115377   17911 main.go:143] libmachine: no network interface addresses found for domain addons-262069 (source=lease)
	I1217 00:06:45.115392   17911 main.go:143] libmachine: trying to list again with source=arp
	I1217 00:06:45.115637   17911 main.go:143] libmachine: unable to find current IP address of domain addons-262069 in network mk-addons-262069 (interfaces detected: [])
	I1217 00:06:45.115666   17911 retry.go:31] will retry after 4.185434258s: waiting for domain to come up
	I1217 00:06:49.306253   17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:06:49.306945   17911 main.go:143] libmachine: domain addons-262069 has current primary IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:06:49.306968   17911 main.go:143] libmachine: found domain IP: 192.168.39.183
	I1217 00:06:49.306978   17911 main.go:143] libmachine: reserving static IP address...
	I1217 00:06:49.307503   17911 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-262069", mac: "52:54:00:78:11:d8", ip: "192.168.39.183"} in network mk-addons-262069
	I1217 00:06:49.579728   17911 main.go:143] libmachine: reserved static IP address 192.168.39.183 for domain addons-262069
	I1217 00:06:49.579756   17911 main.go:143] libmachine: waiting for SSH...
	I1217 00:06:49.579764   17911 main.go:143] libmachine: Getting to WaitForSSH function...
	I1217 00:06:49.583518   17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:06:49.584088   17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:minikube Clientid:01:52:54:00:78:11:d8}
	I1217 00:06:49.584136   17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:06:49.584399   17911 main.go:143] libmachine: Using SSH client type: native
	I1217 00:06:49.584694   17911 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1217 00:06:49.584707   17911 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1217 00:06:49.694335   17911 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 00:06:49.694804   17911 main.go:143] libmachine: domain creation complete
	I1217 00:06:49.696808   17911 machine.go:94] provisionDockerMachine start ...
	I1217 00:06:49.699690   17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:06:49.700207   17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:minikube Clientid:01:52:54:00:78:11:d8}
	I1217 00:06:49.700257   17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:06:49.700484   17911 main.go:143] libmachine: Using SSH client type: native
	I1217 00:06:49.700717   17911 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1217 00:06:49.700731   17911 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:06:49.813425   17911 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1217 00:06:49.813465   17911 buildroot.go:166] provisioning hostname "addons-262069"
	I1217 00:06:49.816821   17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:06:49.817335   17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
	I1217 00:06:49.817363   17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:06:49.817561   17911 main.go:143] libmachine: Using SSH client type: native
	I1217 00:06:49.817743   17911 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1217 00:06:49.817755   17911 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-262069 && echo "addons-262069" | sudo tee /etc/hostname
	I1217 00:06:49.943763   17911 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-262069
	
	I1217 00:06:49.946937   17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:06:49.947468   17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
	I1217 00:06:49.947503   17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:06:49.947715   17911 main.go:143] libmachine: Using SSH client type: native
	I1217 00:06:49.948009   17911 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1217 00:06:49.948047   17911 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-262069' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-262069/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-262069' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:06:50.066107   17911 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 00:06:50.066143   17911 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22168-12839/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-12839/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-12839/.minikube}
	I1217 00:06:50.066191   17911 buildroot.go:174] setting up certificates
	I1217 00:06:50.066209   17911 provision.go:84] configureAuth start
	I1217 00:06:50.069525   17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:06:50.070099   17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
	I1217 00:06:50.070138   17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:06:50.073351   17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:06:50.073864   17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
	I1217 00:06:50.073902   17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:06:50.074158   17911 provision.go:143] copyHostCerts
	I1217 00:06:50.074249   17911 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-12839/.minikube/ca.pem (1078 bytes)
	I1217 00:06:50.074434   17911 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-12839/.minikube/cert.pem (1123 bytes)
	I1217 00:06:50.074576   17911 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-12839/.minikube/key.pem (1679 bytes)
	I1217 00:06:50.074679   17911 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-12839/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca-key.pem org=jenkins.addons-262069 san=[127.0.0.1 192.168.39.183 addons-262069 localhost minikube]
	I1217 00:06:50.162585   17911 provision.go:177] copyRemoteCerts
	I1217 00:06:50.162655   17911 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:06:50.165053   17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:06:50.165463   17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
	I1217 00:06:50.165485   17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:06:50.165610   17911 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069/id_rsa Username:docker}
	I1217 00:06:50.253682   17911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 00:06:50.288484   17911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1217 00:06:50.322785   17911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 00:06:50.357887   17911 provision.go:87] duration metric: took 291.645642ms to configureAuth
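	The three scp calls above install the CA and server certificate generated at 00:06:50.074679 into /etc/docker (the repeated "/etc/docker /etc/docker /etc/docker" in the earlier mkdir is just the dirname of each destination joined into one command). The server cert's SAN list can be checked against the san=[...] arguments from the generation step; a sketch, assuming openssl is available in the guest (entry ordering in the output is illustrative):
	
		$ openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'
		    X509v3 Subject Alternative Name:
		        DNS:addons-262069, DNS:localhost, DNS:minikube, IP Address:127.0.0.1, IP Address:192.168.39.183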
	I1217 00:06:50.357911   17911 buildroot.go:189] setting minikube options for container-runtime
	I1217 00:06:50.358145   17911 config.go:182] Loaded profile config "addons-262069": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:06:50.361101   17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:06:50.361524   17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
	I1217 00:06:50.361558   17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:06:50.361818   17911 main.go:143] libmachine: Using SSH client type: native
	I1217 00:06:50.362047   17911 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1217 00:06:50.362070   17911 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 00:06:50.753241   17911 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 00:06:50.753268   17911 machine.go:97] duration metric: took 1.056439173s to provisionDockerMachine
	I1217 00:06:50.753277   17911 client.go:176] duration metric: took 21.353980905s to LocalClient.Create
	I1217 00:06:50.753296   17911 start.go:167] duration metric: took 21.354040963s to libmachine.API.Create "addons-262069"
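	The sysconfig drop-in written above carries the '--insecure-registry 10.96.0.0/12' flag into the cri-o service; on the minikube ISO the unit is assumed to source it via an EnvironmentFile directive. A quick post-restart check (sketch):
	
		$ cat /etc/sysconfig/crio.minikube
		CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
		$ systemctl cat crio | grep -i minikube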
	I1217 00:06:50.753305   17911 start.go:293] postStartSetup for "addons-262069" (driver="kvm2")
	I1217 00:06:50.753317   17911 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:06:50.753375   17911 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:06:50.756514   17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:06:50.756986   17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
	I1217 00:06:50.757046   17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:06:50.757300   17911 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069/id_rsa Username:docker}
	I1217 00:06:50.843163   17911 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:06:50.848946   17911 info.go:137] Remote host: Buildroot 2025.02
	I1217 00:06:50.848974   17911 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12839/.minikube/addons for local assets ...
	I1217 00:06:50.849048   17911 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12839/.minikube/files for local assets ...
	I1217 00:06:50.849086   17911 start.go:296] duration metric: took 95.774347ms for postStartSetup
	I1217 00:06:50.880171   17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:06:50.880746   17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
	I1217 00:06:50.880780   17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:06:50.881106   17911 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/config.json ...
	I1217 00:06:50.881386   17911 start.go:128] duration metric: took 21.48394966s to createHost
	I1217 00:06:50.884160   17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:06:50.884614   17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
	I1217 00:06:50.884673   17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:06:50.884845   17911 main.go:143] libmachine: Using SSH client type: native
	I1217 00:06:50.885173   17911 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.183 22 <nil> <nil>}
	I1217 00:06:50.885193   17911 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1217 00:06:50.992119   17911 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765930010.958390996
	
	I1217 00:06:50.992204   17911 fix.go:216] guest clock: 1765930010.958390996
	I1217 00:06:50.992214   17911 fix.go:229] Guest: 2025-12-17 00:06:50.958390996 +0000 UTC Remote: 2025-12-17 00:06:50.881409729 +0000 UTC m=+21.584032290 (delta=76.981267ms)
	I1217 00:06:50.992238   17911 fix.go:200] guest clock delta is within tolerance: 76.981267ms
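	The fix.go lines above implement a clock-skew check: the guest clock is read over SSH with 'date +%s.%N' and compared with a host-side timestamp taken around the same call, and a resync is triggered only when the delta exceeds minikube's tolerance (here 76.98 ms is well within it). The same measurement by hand, as a rough sketch (assumes direct ssh access to the guest):
	
		host_ts=$(date +%s.%N)
		guest_ts=$(ssh root@192.168.39.183 'date +%s.%N')
		# positive delta: guest clock ahead of host; includes SSH round-trip noise
		echo "delta: $(echo "$guest_ts - $host_ts" | bc)s"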
	I1217 00:06:50.992245   17911 start.go:83] releasing machines lock for "addons-262069", held for 21.594895966s
	I1217 00:06:50.995881   17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:06:50.996398   17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
	I1217 00:06:50.996426   17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:06:50.997036   17911 ssh_runner.go:195] Run: cat /version.json
	I1217 00:06:50.997111   17911 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 00:06:51.000174   17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:06:51.000341   17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:06:51.000627   17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
	I1217 00:06:51.000653   17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:06:51.000719   17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
	I1217 00:06:51.000748   17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:06:51.000810   17911 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069/id_rsa Username:docker}
	I1217 00:06:51.001051   17911 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069/id_rsa Username:docker}
	I1217 00:06:51.077380   17911 ssh_runner.go:195] Run: systemctl --version
	I1217 00:06:51.106720   17911 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 00:06:51.688135   17911 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 00:06:51.696822   17911 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:06:51.696893   17911 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:06:51.719872   17911 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 00:06:51.719899   17911 start.go:496] detecting cgroup driver to use...
	I1217 00:06:51.719963   17911 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 00:06:51.746757   17911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:06:51.766895   17911 docker.go:218] disabling cri-docker service (if available) ...
	I1217 00:06:51.766964   17911 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 00:06:51.786707   17911 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 00:06:51.808162   17911 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 00:06:51.964974   17911 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 00:06:52.182834   17911 docker.go:234] disabling docker service ...
	I1217 00:06:52.182901   17911 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 00:06:52.200724   17911 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 00:06:52.217612   17911 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 00:06:52.389096   17911 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 00:06:52.539146   17911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:06:52.556703   17911 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:06:52.582599   17911 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 00:06:52.582692   17911 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:06:52.596725   17911 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 00:06:52.596797   17911 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:06:52.611153   17911 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:06:52.625661   17911 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:06:52.640879   17911 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:06:52.656041   17911 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:06:52.669426   17911 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:06:52.692636   17911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
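	Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with at least the following settings. This fragment is reconstructed from the commands, not dumped from the file, so surrounding keys and section headers are omitted:
	
		pause_image = "registry.k8s.io/pause:3.10.1"
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"
		default_sysctls = [
		  "net.ipv4.ip_unprivileged_port_start=0",
		]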
	I1217 00:06:52.708891   17911 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:06:52.721811   17911 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1217 00:06:52.721875   17911 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1217 00:06:52.747842   17911 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
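	The failed sysctl above is expected on a fresh guest: the net.bridge.* keys only exist once the br_netfilter module is loaded, which the modprobe that follows takes care of. Verifying the end state (sketch):
	
		$ lsmod | grep br_netfilter          # module now loaded
		$ sysctl net.bridge.bridge-nf-call-iptables
		$ cat /proc/sys/net/ipv4/ip_forward  # 1 after the echo above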
	I1217 00:06:52.761648   17911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:06:52.911574   17911 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 00:06:53.142300   17911 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 00:06:53.142419   17911 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 00:06:53.148213   17911 start.go:564] Will wait 60s for crictl version
	I1217 00:06:53.148293   17911 ssh_runner.go:195] Run: which crictl
	I1217 00:06:53.152721   17911 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1217 00:06:53.189608   17911 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1217 00:06:53.189754   17911 ssh_runner.go:195] Run: crio --version
	I1217 00:06:53.219996   17911 ssh_runner.go:195] Run: crio --version
	I1217 00:06:53.305579   17911 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1217 00:06:53.317279   17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:06:53.317802   17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
	I1217 00:06:53.317834   17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:06:53.318076   17911 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1217 00:06:53.323499   17911 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 00:06:53.340386   17911 kubeadm.go:884] updating cluster {Name:addons-262069 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-262069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:06:53.340527   17911 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 00:06:53.340578   17911 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:06:53.373645   17911 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1217 00:06:53.373735   17911 ssh_runner.go:195] Run: which lz4
	I1217 00:06:53.378763   17911 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1217 00:06:53.384417   17911 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1217 00:06:53.384458   17911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
	I1217 00:06:54.714134   17911 crio.go:462] duration metric: took 1.335442713s to copy over tarball
	I1217 00:06:54.714264   17911 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1217 00:06:56.278914   17911 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.564599763s)
	I1217 00:06:56.278956   17911 crio.go:469] duration metric: took 1.564785516s to extract the tarball
	I1217 00:06:56.278963   17911 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1217 00:06:56.317367   17911 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:06:56.359563   17911 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:06:56.359590   17911 cache_images.go:86] Images are preloaded, skipping loading
	I1217 00:06:56.360108   17911 kubeadm.go:935] updating node { 192.168.39.183 8443 v1.34.2 crio true true} ...
	I1217 00:06:56.360214   17911 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-262069 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.183
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-262069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
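	The unit fragment above is written a few lines later as the 313-byte drop-in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. The empty 'ExecStart=' line is the usual systemd idiom: it clears the ExecStart inherited from the base kubelet.service before the new command line is set. To inspect the merged result on the guest (sketch):
	
		$ systemctl cat kubelet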
	I1217 00:06:56.360303   17911 ssh_runner.go:195] Run: crio config
	I1217 00:06:56.414892   17911 cni.go:84] Creating CNI manager for ""
	I1217 00:06:56.414923   17911 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 00:06:56.414944   17911 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 00:06:56.414972   17911 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.183 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-262069 NodeName:addons-262069 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.183"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.183 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:06:56.415142   17911 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.183
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-262069"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.183"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.183"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
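	The four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are the full kubeadm config that is written to /var/tmp/minikube/kubeadm.yaml.new below and handed to kubeadm init at 00:06:57. Such a config can be sanity-checked without touching the node; a sketch:
	
		$ kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run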
	
	I1217 00:06:56.415217   17911 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1217 00:06:56.428454   17911 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:06:56.428541   17911 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:06:56.441469   17911 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1217 00:06:56.464193   17911 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 00:06:56.487104   17911 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1217 00:06:56.509096   17911 ssh_runner.go:195] Run: grep 192.168.39.183	control-plane.minikube.internal$ /etc/hosts
	I1217 00:06:56.513592   17911 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.183	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 00:06:56.529346   17911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:06:56.670337   17911 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:06:56.706988   17911 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069 for IP: 192.168.39.183
	I1217 00:06:56.707014   17911 certs.go:195] generating shared ca certs ...
	I1217 00:06:56.707042   17911 certs.go:227] acquiring lock for ca certs: {Name:mk381e1d576792ac916a6048c2225a8ab856de70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:06:56.707233   17911 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-12839/.minikube/ca.key
	I1217 00:06:56.760158   17911 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12839/.minikube/ca.crt ...
	I1217 00:06:56.760187   17911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12839/.minikube/ca.crt: {Name:mkb2c08e9d46609296dd89647d95742b5db1a4b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:06:56.760369   17911 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12839/.minikube/ca.key ...
	I1217 00:06:56.760382   17911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12839/.minikube/ca.key: {Name:mk7cec444890283789c96bcbb8344d3796e24b60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:06:56.760461   17911 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-12839/.minikube/proxy-client-ca.key
	I1217 00:06:56.826173   17911 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12839/.minikube/proxy-client-ca.crt ...
	I1217 00:06:56.826204   17911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12839/.minikube/proxy-client-ca.crt: {Name:mk8eaeff7b342ac9d7fbe6b921ae9ee04f8152f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:06:56.826365   17911 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12839/.minikube/proxy-client-ca.key ...
	I1217 00:06:56.826377   17911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12839/.minikube/proxy-client-ca.key: {Name:mk047258c3120e08a69c19fd6689532a7cadbd45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:06:56.826455   17911 certs.go:257] generating profile certs ...
	I1217 00:06:56.826510   17911 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/client.key
	I1217 00:06:56.826530   17911 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/client.crt with IP's: []
	I1217 00:06:56.951623   17911 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/client.crt ...
	I1217 00:06:56.951651   17911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/client.crt: {Name:mkb13e009b1a1654f88324d661c047a2b60d50be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:06:56.951802   17911 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/client.key ...
	I1217 00:06:56.951814   17911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/client.key: {Name:mkb3b6b6b215aa31da9d982cab9553641a45d235 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:06:56.951879   17911 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/apiserver.key.c5ec7266
	I1217 00:06:56.951897   17911 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/apiserver.crt.c5ec7266 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.183]
	I1217 00:06:57.170301   17911 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/apiserver.crt.c5ec7266 ...
	I1217 00:06:57.170329   17911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/apiserver.crt.c5ec7266: {Name:mkd98bc355df73c446b891110632f2910c5ace14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:06:57.170500   17911 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/apiserver.key.c5ec7266 ...
	I1217 00:06:57.170514   17911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/apiserver.key.c5ec7266: {Name:mk2a9e507d293c96915e5ee5adf189f03b6b2c0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:06:57.170584   17911 certs.go:382] copying /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/apiserver.crt.c5ec7266 -> /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/apiserver.crt
	I1217 00:06:57.170649   17911 certs.go:386] copying /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/apiserver.key.c5ec7266 -> /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/apiserver.key
	I1217 00:06:57.170695   17911 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/proxy-client.key
	I1217 00:06:57.170711   17911 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/proxy-client.crt with IP's: []
	I1217 00:06:57.274314   17911 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/proxy-client.crt ...
	I1217 00:06:57.274343   17911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/proxy-client.crt: {Name:mkce03c36886d4cd2da2547442c30d7ce503940b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:06:57.274503   17911 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/proxy-client.key ...
	I1217 00:06:57.274514   17911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/proxy-client.key: {Name:mk67157794ec591410a25272dec9e7070cac31fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:06:57.274673   17911 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca-key.pem (1679 bytes)
	I1217 00:06:57.274707   17911 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca.pem (1078 bytes)
	I1217 00:06:57.274731   17911 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/cert.pem (1123 bytes)
	I1217 00:06:57.274760   17911 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/key.pem (1679 bytes)
	I1217 00:06:57.275303   17911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:06:57.309054   17911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 00:06:57.342756   17911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:06:57.379046   17911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 00:06:57.419351   17911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1217 00:06:57.458505   17911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 00:06:57.491196   17911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:06:57.523486   17911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 00:06:57.559687   17911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:06:57.593004   17911 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:06:57.615695   17911 ssh_runner.go:195] Run: openssl version
	I1217 00:06:57.622651   17911 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:06:57.635985   17911 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:06:57.648884   17911 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:06:57.654490   17911 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:06:57.654574   17911 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:06:57.662568   17911 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 00:06:57.675685   17911 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
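	The openssl/ln pair above implements OpenSSL's hashed-directory lookup: tools resolve a CA in /etc/ssl/certs through a symlink named <subject-hash>.0, which is exactly what b5213941.0 provides for minikubeCA.pem. Reproducing the hash by hand (sketch):
	
		$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
		b5213941
		$ readlink /etc/ssl/certs/b5213941.0
		/etc/ssl/certs/minikubeCA.pem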
	I1217 00:06:57.688551   17911 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:06:57.693797   17911 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 00:06:57.693852   17911 kubeadm.go:401] StartCluster: {Name:addons-262069 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-262069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:06:57.693930   17911 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:06:57.693983   17911 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:06:57.731296   17911 cri.go:89] found id: ""
	I1217 00:06:57.731379   17911 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:06:57.745397   17911 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 00:06:57.758798   17911 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 00:06:57.772141   17911 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 00:06:57.772161   17911 kubeadm.go:158] found existing configuration files:
	
	I1217 00:06:57.772215   17911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 00:06:57.784210   17911 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 00:06:57.784276   17911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 00:06:57.797070   17911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 00:06:57.809671   17911 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 00:06:57.809734   17911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 00:06:57.822571   17911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 00:06:57.834619   17911 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 00:06:57.834683   17911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 00:06:57.847701   17911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 00:06:57.860875   17911 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 00:06:57.860939   17911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 00:06:57.873821   17911 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1217 00:06:58.033541   17911 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 00:07:10.593721   17911 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1217 00:07:10.593794   17911 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 00:07:10.593896   17911 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 00:07:10.594076   17911 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 00:07:10.594204   17911 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 00:07:10.594287   17911 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 00:07:10.596155   17911 out.go:252]   - Generating certificates and keys ...
	I1217 00:07:10.596249   17911 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 00:07:10.596341   17911 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 00:07:10.596425   17911 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 00:07:10.596530   17911 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 00:07:10.596619   17911 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 00:07:10.596704   17911 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 00:07:10.596792   17911 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 00:07:10.596944   17911 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-262069 localhost] and IPs [192.168.39.183 127.0.0.1 ::1]
	I1217 00:07:10.597040   17911 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 00:07:10.597189   17911 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-262069 localhost] and IPs [192.168.39.183 127.0.0.1 ::1]
	I1217 00:07:10.597270   17911 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 00:07:10.597366   17911 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 00:07:10.597427   17911 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 00:07:10.597510   17911 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 00:07:10.597593   17911 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 00:07:10.597673   17911 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 00:07:10.597760   17911 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 00:07:10.597874   17911 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 00:07:10.597938   17911 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 00:07:10.598010   17911 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 00:07:10.598108   17911 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 00:07:10.599830   17911 out.go:252]   - Booting up control plane ...
	I1217 00:07:10.599932   17911 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 00:07:10.600046   17911 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 00:07:10.600160   17911 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 00:07:10.600309   17911 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 00:07:10.600445   17911 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 00:07:10.600577   17911 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 00:07:10.600682   17911 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 00:07:10.600733   17911 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 00:07:10.600903   17911 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 00:07:10.601057   17911 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 00:07:10.601142   17911 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.50295449s
	I1217 00:07:10.601257   17911 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 00:07:10.601361   17911 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.183:8443/livez
	I1217 00:07:10.601483   17911 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 00:07:10.601585   17911 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 00:07:10.601684   17911 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.009421732s
	I1217 00:07:10.601777   17911 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.759014675s
	I1217 00:07:10.601868   17911 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.004757659s
	I1217 00:07:10.601990   17911 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 00:07:10.602163   17911 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 00:07:10.602257   17911 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 00:07:10.602432   17911 kubeadm.go:319] [mark-control-plane] Marking the node addons-262069 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 00:07:10.602521   17911 kubeadm.go:319] [bootstrap-token] Using token: uq1jlh.cbunlm48ja5dh288
	I1217 00:07:10.604152   17911 out.go:252]   - Configuring RBAC rules ...
	I1217 00:07:10.604262   17911 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 00:07:10.604403   17911 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 00:07:10.604554   17911 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 00:07:10.604742   17911 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 00:07:10.604913   17911 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 00:07:10.605047   17911 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 00:07:10.605188   17911 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 00:07:10.605258   17911 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 00:07:10.605341   17911 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 00:07:10.605349   17911 kubeadm.go:319] 
	I1217 00:07:10.605436   17911 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 00:07:10.605444   17911 kubeadm.go:319] 
	I1217 00:07:10.605544   17911 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 00:07:10.605553   17911 kubeadm.go:319] 
	I1217 00:07:10.605593   17911 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 00:07:10.605677   17911 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 00:07:10.605759   17911 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 00:07:10.605777   17911 kubeadm.go:319] 
	I1217 00:07:10.605858   17911 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 00:07:10.605872   17911 kubeadm.go:319] 
	I1217 00:07:10.605938   17911 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 00:07:10.605952   17911 kubeadm.go:319] 
	I1217 00:07:10.606042   17911 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 00:07:10.606169   17911 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 00:07:10.606271   17911 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 00:07:10.606280   17911 kubeadm.go:319] 
	I1217 00:07:10.606388   17911 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 00:07:10.606477   17911 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 00:07:10.606486   17911 kubeadm.go:319] 
	I1217 00:07:10.606597   17911 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token uq1jlh.cbunlm48ja5dh288 \
	I1217 00:07:10.606747   17911 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:28cbf36bca9e367b0c14399fa9a279bc1d5d093a4138092f10e2eab3c16dce77 \
	I1217 00:07:10.606802   17911 kubeadm.go:319] 	--control-plane 
	I1217 00:07:10.606820   17911 kubeadm.go:319] 
	I1217 00:07:10.606944   17911 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 00:07:10.606960   17911 kubeadm.go:319] 
	I1217 00:07:10.607101   17911 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token uq1jlh.cbunlm48ja5dh288 \
	I1217 00:07:10.607277   17911 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:28cbf36bca9e367b0c14399fa9a279bc1d5d093a4138092f10e2eab3c16dce77 
	I1217 00:07:10.607302   17911 cni.go:84] Creating CNI manager for ""
	I1217 00:07:10.607312   17911 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 00:07:10.609109   17911 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1217 00:07:10.610448   17911 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1217 00:07:10.628221   17911 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
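	1-k8s.conflist (496 bytes) is the bridge CNI configuration minikube installs when it recommends the bridge plugin. Its exact contents are not in this log; a representative bridge conflist for the 10.244.0.0/16 pod CIDR chosen earlier might look like the following (illustrative only):
	
		{
		  "cniVersion": "0.3.1",
		  "name": "bridge",
		  "plugins": [
		    {
		      "type": "bridge",
		      "bridge": "bridge",
		      "isDefaultGateway": true,
		      "ipMasq": true,
		      "hairpinMode": true,
		      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
		    },
		    { "type": "portmap", "capabilities": { "portMappings": true } }
		  ]
		}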
	I1217 00:07:10.653817   17911 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 00:07:10.653964   17911 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-262069 minikube.k8s.io/updated_at=2025_12_17T00_07_10_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1 minikube.k8s.io/name=addons-262069 minikube.k8s.io/primary=true
	I1217 00:07:10.653971   17911 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:07:10.733102   17911 ops.go:34] apiserver oom_adj: -16
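	An oom_adj of -16 (legacy scale -17 to +15; lower means less likely to be killed) marks kube-apiserver as a strongly protected target for the kernel OOM killer, which is what the check above reads straight from procfs:
	
		$ cat /proc/$(pgrep kube-apiserver)/oom_adj
		-16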
	I1217 00:07:10.820642   17911 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:07:11.321669   17911 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:07:11.821721   17911 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:07:12.321404   17911 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:07:12.821428   17911 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:07:13.320704   17911 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:07:13.821538   17911 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:07:14.321397   17911 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:07:14.821286   17911 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:07:15.321375   17911 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 00:07:15.413780   17911 kubeadm.go:1114] duration metric: took 4.759900031s to wait for elevateKubeSystemPrivileges
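
The burst of identical "kubectl get sa default" runs above is a poll: kubeadm creates the default service account asynchronously after init, and minikube re-checks roughly every 500ms until it exists. A standalone sketch of the same wait; the timeout value is an assumption:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // assumed timeout
	for time.Now().Before(deadline) {
		// Exit status 0 means the default service account exists.
		cmd := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
			"get", "sa", "default")
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	panic("timed out waiting for default service account")
}
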
	I1217 00:07:15.413821   17911 kubeadm.go:403] duration metric: took 17.719971777s to StartCluster
	I1217 00:07:15.413841   17911 settings.go:142] acquiring lock: {Name:mk0fa06a6a557f0851b041158306daec92094c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:07:15.413977   17911 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-12839/kubeconfig
	I1217 00:07:15.414444   17911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12839/kubeconfig: {Name:mk0867cff530c231805e36a9674d4fe6612173a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:07:15.414678   17911 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.183 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 00:07:15.414690   17911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 00:07:15.414719   17911 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1217 00:07:15.414841   17911 addons.go:70] Setting yakd=true in profile "addons-262069"
	I1217 00:07:15.414862   17911 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-262069"
	I1217 00:07:15.414882   17911 addons.go:239] Setting addon yakd=true in "addons-262069"
	I1217 00:07:15.414888   17911 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-262069"
	I1217 00:07:15.414912   17911 host.go:66] Checking if "addons-262069" exists ...
	I1217 00:07:15.414926   17911 addons.go:70] Setting cloud-spanner=true in profile "addons-262069"
	I1217 00:07:15.414932   17911 config.go:182] Loaded profile config "addons-262069": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:07:15.414945   17911 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-262069"
	I1217 00:07:15.414974   17911 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-262069"
	I1217 00:07:15.414938   17911 addons.go:239] Setting addon cloud-spanner=true in "addons-262069"
	I1217 00:07:15.414991   17911 host.go:66] Checking if "addons-262069" exists ...
	I1217 00:07:15.415001   17911 host.go:66] Checking if "addons-262069" exists ...
	I1217 00:07:15.415106   17911 addons.go:70] Setting storage-provisioner=true in profile "addons-262069"
	I1217 00:07:15.415126   17911 addons.go:239] Setting addon storage-provisioner=true in "addons-262069"
	I1217 00:07:15.415134   17911 addons.go:70] Setting gcp-auth=true in profile "addons-262069"
	I1217 00:07:15.415154   17911 host.go:66] Checking if "addons-262069" exists ...
	I1217 00:07:15.415206   17911 mustload.go:66] Loading cluster: addons-262069
	I1217 00:07:15.415207   17911 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-262069"
	I1217 00:07:15.415219   17911 addons.go:70] Setting default-storageclass=true in profile "addons-262069"
	I1217 00:07:15.415229   17911 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-262069"
	I1217 00:07:15.415234   17911 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-262069"
	I1217 00:07:15.415254   17911 host.go:66] Checking if "addons-262069" exists ...
	I1217 00:07:15.415382   17911 config.go:182] Loaded profile config "addons-262069": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:07:15.415533   17911 addons.go:70] Setting registry=true in profile "addons-262069"
	I1217 00:07:15.415548   17911 addons.go:239] Setting addon registry=true in "addons-262069"
	I1217 00:07:15.415570   17911 host.go:66] Checking if "addons-262069" exists ...
	I1217 00:07:15.415939   17911 addons.go:70] Setting volcano=true in profile "addons-262069"
	I1217 00:07:15.415961   17911 addons.go:239] Setting addon volcano=true in "addons-262069"
	I1217 00:07:15.415986   17911 host.go:66] Checking if "addons-262069" exists ...
	I1217 00:07:15.414917   17911 host.go:66] Checking if "addons-262069" exists ...
	I1217 00:07:15.416100   17911 addons.go:70] Setting ingress=true in profile "addons-262069"
	I1217 00:07:15.416117   17911 addons.go:239] Setting addon ingress=true in "addons-262069"
	I1217 00:07:15.416148   17911 host.go:66] Checking if "addons-262069" exists ...
	I1217 00:07:15.416510   17911 addons.go:70] Setting volumesnapshots=true in profile "addons-262069"
	I1217 00:07:15.416516   17911 addons.go:70] Setting metrics-server=true in profile "addons-262069"
	I1217 00:07:15.416548   17911 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-262069"
	I1217 00:07:15.414850   17911 addons.go:70] Setting inspektor-gadget=true in profile "addons-262069"
	I1217 00:07:15.416563   17911 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-262069"
	I1217 00:07:15.416576   17911 addons.go:70] Setting ingress-dns=true in profile "addons-262069"
	I1217 00:07:15.416594   17911 addons.go:239] Setting addon ingress-dns=true in "addons-262069"
	I1217 00:07:15.416611   17911 addons.go:70] Setting registry-creds=true in profile "addons-262069"
	I1217 00:07:15.416624   17911 host.go:66] Checking if "addons-262069" exists ...
	I1217 00:07:15.416629   17911 addons.go:239] Setting addon registry-creds=true in "addons-262069"
	I1217 00:07:15.416652   17911 host.go:66] Checking if "addons-262069" exists ...
	I1217 00:07:15.416549   17911 addons.go:239] Setting addon metrics-server=true in "addons-262069"
	I1217 00:07:15.416737   17911 host.go:66] Checking if "addons-262069" exists ...
	I1217 00:07:15.416566   17911 addons.go:239] Setting addon inspektor-gadget=true in "addons-262069"
	I1217 00:07:15.416919   17911 host.go:66] Checking if "addons-262069" exists ...
	I1217 00:07:15.416532   17911 addons.go:239] Setting addon volumesnapshots=true in "addons-262069"
	I1217 00:07:15.417201   17911 host.go:66] Checking if "addons-262069" exists ...
	I1217 00:07:15.417620   17911 out.go:179] * Verifying Kubernetes components...
	I1217 00:07:15.419377   17911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:07:15.423461   17911 host.go:66] Checking if "addons-262069" exists ...
	I1217 00:07:15.423544   17911 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1217 00:07:15.423548   17911 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1217 00:07:15.423627   17911 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1217 00:07:15.423763   17911 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 00:07:15.425060   17911 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1217 00:07:15.425076   17911 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1217 00:07:15.425085   17911 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1217 00:07:15.425097   17911 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1217 00:07:15.425160   17911 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1217 00:07:15.425949   17911 addons.go:239] Setting addon default-storageclass=true in "addons-262069"
	I1217 00:07:15.425983   17911 host.go:66] Checking if "addons-262069" exists ...
	I1217 00:07:15.426142   17911 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:07:15.426152   17911 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1217 00:07:15.426160   17911 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	W1217 00:07:15.425427   17911 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1217 00:07:15.427240   17911 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1217 00:07:15.427275   17911 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1217 00:07:15.427287   17911 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1217 00:07:15.427301   17911 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1217 00:07:15.427541   17911 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-262069"
	I1217 00:07:15.428176   17911 host.go:66] Checking if "addons-262069" exists ...
	I1217 00:07:15.428458   17911 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1217 00:07:15.428568   17911 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1217 00:07:15.429351   17911 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1217 00:07:15.429367   17911 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1217 00:07:15.429406   17911 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1217 00:07:15.429425   17911 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1217 00:07:15.429521   17911 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1217 00:07:15.430213   17911 out.go:179]   - Using image docker.io/registry:3.0.0
	I1217 00:07:15.430249   17911 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1217 00:07:15.430649   17911 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1217 00:07:15.431083   17911 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1217 00:07:15.431089   17911 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1217 00:07:15.431638   17911 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 00:07:15.431655   17911 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 00:07:15.431129   17911 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1217 00:07:15.431783   17911 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1217 00:07:15.431140   17911 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1217 00:07:15.431953   17911 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1217 00:07:15.431147   17911 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1217 00:07:15.432055   17911 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1217 00:07:15.432157   17911 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1217 00:07:15.432167   17911 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1217 00:07:15.435919   17911 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1217 00:07:15.436004   17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:07:15.436002   17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:07:15.436033   17911 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1217 00:07:15.436600   17911 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1217 00:07:15.436962   17911 out.go:179]   - Using image docker.io/busybox:stable
	I1217 00:07:15.437729   17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:07:15.438183   17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
	I1217 00:07:15.438623   17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:07:15.438333   17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
	I1217 00:07:15.438747   17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:07:15.439034   17911 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1217 00:07:15.439935   17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:07:15.440070   17911 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069/id_rsa Username:docker}
	I1217 00:07:15.440194   17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
	I1217 00:07:15.440229   17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:07:15.440226   17911 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069/id_rsa Username:docker}
	I1217 00:07:15.440665   17911 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1217 00:07:15.440668   17911 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1217 00:07:15.441142   17911 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069/id_rsa Username:docker}
	I1217 00:07:15.442161   17911 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1217 00:07:15.442327   17911 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1217 00:07:15.442344   17911 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1217 00:07:15.442344   17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:07:15.442389   17911 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1217 00:07:15.442409   17911 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1217 00:07:15.442747   17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
	I1217 00:07:15.442779   17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:07:15.443454   17911 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069/id_rsa Username:docker}
	I1217 00:07:15.443937   17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
	I1217 00:07:15.443987   17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:07:15.445045   17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:07:15.445269   17911 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069/id_rsa Username:docker}
	I1217 00:07:15.445460   17911 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1217 00:07:15.446378   17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:07:15.446418   17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
	I1217 00:07:15.446447   17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:07:15.446779   17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:07:15.446815   17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:07:15.446887   17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:07:15.447273   17911 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069/id_rsa Username:docker}
	I1217 00:07:15.447795   17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:07:15.447843   17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
	I1217 00:07:15.447862   17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:07:15.447925   17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
	I1217 00:07:15.447949   17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:07:15.447999   17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
	I1217 00:07:15.448038   17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:07:15.448127   17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
	I1217 00:07:15.448159   17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:07:15.448525   17911 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069/id_rsa Username:docker}
	I1217 00:07:15.448563   17911 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069/id_rsa Username:docker}
	I1217 00:07:15.448865   17911 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1217 00:07:15.448919   17911 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069/id_rsa Username:docker}
	I1217 00:07:15.449305   17911 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069/id_rsa Username:docker}
	I1217 00:07:15.449711   17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
	I1217 00:07:15.449742   17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:07:15.449912   17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:07:15.450262   17911 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069/id_rsa Username:docker}
	I1217 00:07:15.451348   17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
	I1217 00:07:15.451379   17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:07:15.451568   17911 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1217 00:07:15.451717   17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:07:15.451838   17911 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069/id_rsa Username:docker}
	I1217 00:07:15.452663   17911 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1217 00:07:15.452680   17911 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1217 00:07:15.452730   17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
	I1217 00:07:15.452765   17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:07:15.452967   17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:07:15.453233   17911 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069/id_rsa Username:docker}
	I1217 00:07:15.453673   17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
	I1217 00:07:15.453710   17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:07:15.453878   17911 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069/id_rsa Username:docker}
	I1217 00:07:15.455837   17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:07:15.456327   17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
	I1217 00:07:15.456358   17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:07:15.456549   17911 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069/id_rsa Username:docker}
	W1217 00:07:15.639583   17911 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:60052->192.168.39.183:22: read: connection reset by peer
	I1217 00:07:15.639621   17911 retry.go:31] will retry after 306.783579ms: ssh: handshake failed: read tcp 192.168.39.1:60052->192.168.39.183:22: read: connection reset by peer
	W1217 00:07:15.673610   17911 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:60064->192.168.39.183:22: read: connection reset by peer
	I1217 00:07:15.673637   17911 retry.go:31] will retry after 222.936771ms: ssh: handshake failed: read tcp 192.168.39.1:60064->192.168.39.183:22: read: connection reset by peer
	W1217 00:07:15.676198   17911 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:60066->192.168.39.183:22: read: connection reset by peer
	I1217 00:07:15.676225   17911 retry.go:31] will retry after 167.114733ms: ssh: handshake failed: read tcp 192.168.39.1:60066->192.168.39.183:22: read: connection reset by peer
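
The three handshake failures above are transient: sshd inside the freshly booted VM resets connections until it is fully up, so the dialer retries after a short randomized delay. A minimal sketch of that pattern; the attempt count and delay bounds are assumptions:

package main

import (
	"fmt"
	"math/rand"
	"net"
	"time"
)

func dialWithRetry(addr string, attempts int) (net.Conn, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err == nil {
			return conn, nil
		}
		lastErr = err
		// Randomized backoff, like the retry.go lines above.
		delay := time.Duration(100+rand.Intn(300)) * time.Millisecond
		fmt.Printf("dial failure (will retry after %v): %v\n", delay, err)
		time.Sleep(delay)
	}
	return nil, lastErr
}

func main() {
	conn, err := dialWithRetry("192.168.39.183:22", 5)
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	fmt.Println("connected:", conn.RemoteAddr())
}
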
	I1217 00:07:16.058121   17911 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1217 00:07:16.058657   17911 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1217 00:07:16.058680   17911 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1217 00:07:16.138043   17911 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:07:16.164970   17911 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:07:16.165080   17911 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
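
This pipeline edits CoreDNS's Corefile in place: it inserts a hosts stanza ahead of the forward plugin so that host.minikube.internal resolves to the host-side bridge IP (192.168.39.1), and a log directive ahead of errors. A sketch of the hosts injection in Go, using an illustrative Corefile rather than the live ConfigMap:

package main

import (
	"fmt"
	"strings"
)

const hostsBlock = `        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
`

func main() {
	// Illustrative Corefile; the real one is read from the coredns
	// ConfigMap in kube-system.
	corefile := `.:53 {
        errors
        health
        forward . /etc/resolv.conf
        cache 30
}
`
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		// Mirror sed's '/forward ./i ...': emit the hosts stanza
		// immediately before the forward plugin line.
		if strings.HasPrefix(strings.TrimSpace(line), "forward .") {
			out.WriteString(hostsBlock)
		}
		out.WriteString(line)
	}
	fmt.Print(out.String())
}
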
	I1217 00:07:16.166109   17911 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1217 00:07:16.209072   17911 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1217 00:07:16.209096   17911 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1217 00:07:16.338376   17911 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1217 00:07:16.338410   17911 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1217 00:07:16.389123   17911 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1217 00:07:16.449627   17911 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1217 00:07:16.478976   17911 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1217 00:07:16.481833   17911 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1217 00:07:16.504160   17911 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:07:16.648895   17911 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1217 00:07:16.648920   17911 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1217 00:07:16.779408   17911 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1217 00:07:16.779466   17911 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1217 00:07:16.803054   17911 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1217 00:07:16.859711   17911 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1217 00:07:16.859734   17911 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1217 00:07:17.056132   17911 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1217 00:07:17.056157   17911 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1217 00:07:17.079409   17911 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1217 00:07:17.380824   17911 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1217 00:07:17.380857   17911 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1217 00:07:17.418239   17911 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1217 00:07:17.418269   17911 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1217 00:07:17.578626   17911 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1217 00:07:17.578653   17911 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1217 00:07:17.737935   17911 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1217 00:07:17.737989   17911 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1217 00:07:17.854536   17911 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1217 00:07:17.892610   17911 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1217 00:07:17.892642   17911 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1217 00:07:18.128210   17911 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1217 00:07:18.128252   17911 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1217 00:07:18.164557   17911 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1217 00:07:18.457266   17911 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1217 00:07:18.457302   17911 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1217 00:07:18.525965   17911 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1217 00:07:18.553683   17911 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1217 00:07:18.553713   17911 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1217 00:07:18.847333   17911 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1217 00:07:18.847357   17911 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1217 00:07:19.020352   17911 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1217 00:07:19.020378   17911 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1217 00:07:19.268051   17911 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1217 00:07:19.268082   17911 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1217 00:07:19.499765   17911 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1217 00:07:19.499793   17911 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1217 00:07:19.701749   17911 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1217 00:07:19.701773   17911 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1217 00:07:19.917917   17911 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1217 00:07:19.917946   17911 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1217 00:07:20.028468   17911 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1217 00:07:20.368063   17911 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1217 00:07:20.368091   17911 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1217 00:07:20.577675   17911 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1217 00:07:20.577700   17911 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1217 00:07:20.840316   17911 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1217 00:07:20.840348   17911 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1217 00:07:21.254726   17911 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1217 00:07:22.980034   17911 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1217 00:07:22.983216   17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:07:22.983697   17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
	I1217 00:07:22.983721   17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:07:22.983896   17911 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069/id_rsa Username:docker}
	I1217 00:07:23.167790   17911 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (7.109626629s)
	I1217 00:07:23.167891   17911 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.029816106s)
	I1217 00:07:23.167958   17911 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.002958804s)
	I1217 00:07:23.168044   17911 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.002905383s)
	I1217 00:07:23.168068   17911 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1217 00:07:23.168138   17911 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.002003478s)
	I1217 00:07:23.168202   17911 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (6.779034562s)
	I1217 00:07:23.168906   17911 node_ready.go:35] waiting up to 6m0s for node "addons-262069" to be "Ready" ...
	I1217 00:07:23.252435   17911 node_ready.go:49] node "addons-262069" is "Ready"
	I1217 00:07:23.252476   17911 node_ready.go:38] duration metric: took 83.538998ms for node "addons-262069" to be "Ready" ...
	I1217 00:07:23.252492   17911 api_server.go:52] waiting for apiserver process to appear ...
	I1217 00:07:23.252552   17911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:07:23.448636   17911 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1217 00:07:23.680247   17911 addons.go:239] Setting addon gcp-auth=true in "addons-262069"
	I1217 00:07:23.680307   17911 host.go:66] Checking if "addons-262069" exists ...
	I1217 00:07:23.682556   17911 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1217 00:07:23.685704   17911 main.go:143] libmachine: domain addons-262069 has defined MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:07:23.686221   17911 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:78:11:d8", ip: ""} in network mk-addons-262069: {Iface:virbr1 ExpiryTime:2025-12-17 01:06:45 +0000 UTC Type:0 Mac:52:54:00:78:11:d8 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:addons-262069 Clientid:01:52:54:00:78:11:d8}
	I1217 00:07:23.686261   17911 main.go:143] libmachine: domain addons-262069 has defined IP address 192.168.39.183 and MAC address 52:54:00:78:11:d8 in network mk-addons-262069
	I1217 00:07:23.686475   17911 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/addons-262069/id_rsa Username:docker}
	I1217 00:07:23.832783   17911 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-262069" context rescaled to 1 replicas
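
A sketch of the rescale noted above; on a single-node cluster minikube trims CoreDNS to one replica, presumably to save resources. The kubeconfig and context flags used by the real call are elided here:

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Equivalent of the kapi.go rescale: one CoreDNS replica.
	cmd := exec.Command("kubectl", "-n", "kube-system",
		"scale", "deployment", "coredns", "--replicas=1")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
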
	I1217 00:07:24.234339   17911 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.784676615s)
	I1217 00:07:24.234433   17911 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.755423832s)
	I1217 00:07:24.234536   17911 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (7.752676235s)
	I1217 00:07:24.234603   17911 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.730411912s)
	I1217 00:07:24.234650   17911 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.431569811s)
	W1217 00:07:24.341434   17911 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
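
This is Kubernetes' optimistic-concurrency conflict: another writer updated the StorageClass between minikube's read and write, so the stale resourceVersion is rejected. The standard remedy is to retry, re-reading server state on each attempt. A hedged sketch using the real is-default-class annotation; the attempt count and delay are assumptions:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	patch := `{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}`
	for attempt := 1; attempt <= 5; attempt++ {
		// kubectl patch re-reads the object server-side, so each
		// retry works against the latest resourceVersion.
		out, err := exec.Command("kubectl", "patch", "storageclass", "local-path",
			"-p", patch).CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		fmt.Printf("attempt %d failed: %v\n%s", attempt, err, out)
		time.Sleep(500 * time.Millisecond)
	}
	panic("could not mark local-path as default storage class")
}
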
	I1217 00:07:26.043785   17911 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.189209071s)
	I1217 00:07:26.043825   17911 addons.go:495] Verifying addon metrics-server=true in "addons-262069"
	I1217 00:07:26.043874   17911 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.879282283s)
	I1217 00:07:26.043907   17911 addons.go:495] Verifying addon registry=true in "addons-262069"
	I1217 00:07:26.043926   17911 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.517925943s)
	I1217 00:07:26.044838   17911 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.965384944s)
	I1217 00:07:26.044865   17911 addons.go:495] Verifying addon ingress=true in "addons-262069"
	I1217 00:07:26.045503   17911 out.go:179] * Verifying registry addon...
	I1217 00:07:26.045503   17911 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-262069 service yakd-dashboard -n yakd-dashboard
	
	I1217 00:07:26.046465   17911 out.go:179] * Verifying ingress addon...
	I1217 00:07:26.048249   17911 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1217 00:07:26.048856   17911 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1217 00:07:26.077340   17911 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1217 00:07:26.077364   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:26.078257   17911 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1217 00:07:26.078278   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:26.527154   17911 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.498639754s)
	W1217 00:07:26.527202   17911 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1217 00:07:26.527230   17911 retry.go:31] will retry after 288.69288ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
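
The retry at 00:07:26.816 below re-applies with --force. The underlying race is that the VolumeSnapshotClass object lands in the same apply batch as the CRD that defines it, before that CRD is Established. A sketch of the usual ordering fix, waiting on the snapshot CRDs before applying any custom resources; the timeout is an assumption:

package main

import (
	"os"
	"os/exec"
)

func main() {
	crds := []string{
		"volumesnapshotclasses.snapshot.storage.k8s.io",
		"volumesnapshotcontents.snapshot.storage.k8s.io",
		"volumesnapshots.snapshot.storage.k8s.io",
	}
	for _, crd := range crds {
		// Block until the API server can serve the new kind.
		cmd := exec.Command("kubectl", "wait", "--for=condition=established",
			"--timeout=60s", "crd/"+crd)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}
}
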
	I1217 00:07:26.650042   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:26.669283   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:26.816202   17911 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1217 00:07:27.116774   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:27.118059   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:27.459575   17911 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.204763179s)
	I1217 00:07:27.459613   17911 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.207034804s)
	I1217 00:07:27.459627   17911 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-262069"
	I1217 00:07:27.459643   17911 api_server.go:72] duration metric: took 12.044939006s to wait for apiserver process to appear ...
	I1217 00:07:27.459651   17911 api_server.go:88] waiting for apiserver healthz status ...
	I1217 00:07:27.459671   17911 api_server.go:253] Checking apiserver healthz at https://192.168.39.183:8443/healthz ...
	I1217 00:07:27.459675   17911 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.777091314s)
	I1217 00:07:27.461875   17911 out.go:179] * Verifying csi-hostpath-driver addon...
	I1217 00:07:27.461891   17911 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1217 00:07:27.463433   17911 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1217 00:07:27.464103   17911 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1217 00:07:27.464857   17911 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1217 00:07:27.464877   17911 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1217 00:07:27.486295   17911 api_server.go:279] https://192.168.39.183:8443/healthz returned 200:
	ok
	I1217 00:07:27.492487   17911 api_server.go:141] control plane version: v1.34.2
	I1217 00:07:27.492524   17911 api_server.go:131] duration metric: took 32.866106ms to wait for apiserver health ...
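
The probe above is a plain HTTPS GET against the apiserver's /healthz endpoint, which returns 200 with the body "ok" once the control plane is serving. A minimal standalone sketch; TLS verification is skipped here for brevity, whereas a proper client would trust the cluster CA instead:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		// For illustration only; do not skip verification in real use.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.39.183:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body)
}
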
	I1217 00:07:27.492534   17911 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 00:07:27.521251   17911 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1217 00:07:27.521280   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:27.522309   17911 system_pods.go:59] 20 kube-system pods found
	I1217 00:07:27.522341   17911 system_pods.go:61] "amd-gpu-device-plugin-h7ktx" [868af750-76b7-4d6a-8b9c-c20ef980f23c] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1217 00:07:27.522352   17911 system_pods.go:61] "coredns-66bc5c9577-225dx" [d0273678-dce6-4db9-bdb2-ba3a3c08cdef] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:07:27.522364   17911 system_pods.go:61] "coredns-66bc5c9577-qx99m" [1a417056-e982-4783-96a5-9b741dd696d7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:07:27.522373   17911 system_pods.go:61] "csi-hostpath-attacher-0" [43ce0e61-2925-4f54-90f3-f9f854f69d01] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1217 00:07:27.522379   17911 system_pods.go:61] "csi-hostpath-resizer-0" [8766aba1-494f-4e3d-92ae-fefb28e912b7] Pending
	I1217 00:07:27.522388   17911 system_pods.go:61] "csi-hostpathplugin-bl7k4" [8f24a367-b121-47d8-961b-5dc07a0a08db] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1217 00:07:27.522395   17911 system_pods.go:61] "etcd-addons-262069" [94f8ab0a-9019-4c23-aa63-c66aee255be9] Running
	I1217 00:07:27.522402   17911 system_pods.go:61] "kube-apiserver-addons-262069" [5973422d-5e3e-40b5-88f8-ce163eec138a] Running
	I1217 00:07:27.522407   17911 system_pods.go:61] "kube-controller-manager-addons-262069" [95a9bcee-f05e-4599-9cb1-dff560827c59] Running
	I1217 00:07:27.522416   17911 system_pods.go:61] "kube-ingress-dns-minikube" [a72b7afb-8519-407e-93cc-fb6d4827edf6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1217 00:07:27.522422   17911 system_pods.go:61] "kube-proxy-pdf4s" [c6e7cf26-13ad-48d5-8dc7-8bdc4518f890] Running
	I1217 00:07:27.522431   17911 system_pods.go:61] "kube-scheduler-addons-262069" [52be5dac-ed10-4237-a532-22849ffcf509] Running
	I1217 00:07:27.522441   17911 system_pods.go:61] "metrics-server-85b7d694d7-94n2m" [9b665994-667f-4a3b-b44d-9949b0c4761c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 00:07:27.522451   17911 system_pods.go:61] "nvidia-device-plugin-daemonset-wb64t" [7e312275-8868-442b-bb94-0569b43cbe03] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1217 00:07:27.522463   17911 system_pods.go:61] "registry-6b586f9694-z9bzt" [15209453-1113-446e-94b5-19d615f67036] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 00:07:27.522475   17911 system_pods.go:61] "registry-creds-764b6fb674-7r5ht" [8dac2506-ca74-4027-a05b-112bb00523e9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 00:07:27.522484   17911 system_pods.go:61] "registry-proxy-ng2lx" [f39654e9-51f3-4325-9568-3999f3904260] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 00:07:27.522493   17911 system_pods.go:61] "snapshot-controller-7d9fbc56b8-85jjc" [3452da8d-e4e0-4ca4-b768-3379b6b892c6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 00:07:27.522506   17911 system_pods.go:61] "snapshot-controller-7d9fbc56b8-s748j" [e6a06856-562a-45d6-af80-78e109d24a5e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 00:07:27.522513   17911 system_pods.go:61] "storage-provisioner" [68b668f5-e60f-44f4-8df7-5378eb708ccc] Running
	I1217 00:07:27.522523   17911 system_pods.go:74] duration metric: took 29.982264ms to wait for pod list to return data ...
	I1217 00:07:27.522534   17911 default_sa.go:34] waiting for default service account to be created ...
	I1217 00:07:27.563155   17911 default_sa.go:45] found service account: "default"
	I1217 00:07:27.563179   17911 default_sa.go:55] duration metric: took 40.636257ms for default service account to be created ...
	I1217 00:07:27.563187   17911 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 00:07:27.590074   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:27.594764   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:27.595156   17911 system_pods.go:86] 20 kube-system pods found
	I1217 00:07:27.595184   17911 system_pods.go:89] "amd-gpu-device-plugin-h7ktx" [868af750-76b7-4d6a-8b9c-c20ef980f23c] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1217 00:07:27.595194   17911 system_pods.go:89] "coredns-66bc5c9577-225dx" [d0273678-dce6-4db9-bdb2-ba3a3c08cdef] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:07:27.595215   17911 system_pods.go:89] "coredns-66bc5c9577-qx99m" [1a417056-e982-4783-96a5-9b741dd696d7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 00:07:27.595228   17911 system_pods.go:89] "csi-hostpath-attacher-0" [43ce0e61-2925-4f54-90f3-f9f854f69d01] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1217 00:07:27.595237   17911 system_pods.go:89] "csi-hostpath-resizer-0" [8766aba1-494f-4e3d-92ae-fefb28e912b7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1217 00:07:27.595248   17911 system_pods.go:89] "csi-hostpathplugin-bl7k4" [8f24a367-b121-47d8-961b-5dc07a0a08db] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1217 00:07:27.595254   17911 system_pods.go:89] "etcd-addons-262069" [94f8ab0a-9019-4c23-aa63-c66aee255be9] Running
	I1217 00:07:27.595260   17911 system_pods.go:89] "kube-apiserver-addons-262069" [5973422d-5e3e-40b5-88f8-ce163eec138a] Running
	I1217 00:07:27.595269   17911 system_pods.go:89] "kube-controller-manager-addons-262069" [95a9bcee-f05e-4599-9cb1-dff560827c59] Running
	I1217 00:07:27.595277   17911 system_pods.go:89] "kube-ingress-dns-minikube" [a72b7afb-8519-407e-93cc-fb6d4827edf6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1217 00:07:27.595282   17911 system_pods.go:89] "kube-proxy-pdf4s" [c6e7cf26-13ad-48d5-8dc7-8bdc4518f890] Running
	I1217 00:07:27.595288   17911 system_pods.go:89] "kube-scheduler-addons-262069" [52be5dac-ed10-4237-a532-22849ffcf509] Running
	I1217 00:07:27.595296   17911 system_pods.go:89] "metrics-server-85b7d694d7-94n2m" [9b665994-667f-4a3b-b44d-9949b0c4761c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 00:07:27.595305   17911 system_pods.go:89] "nvidia-device-plugin-daemonset-wb64t" [7e312275-8868-442b-bb94-0569b43cbe03] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1217 00:07:27.595323   17911 system_pods.go:89] "registry-6b586f9694-z9bzt" [15209453-1113-446e-94b5-19d615f67036] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 00:07:27.595332   17911 system_pods.go:89] "registry-creds-764b6fb674-7r5ht" [8dac2506-ca74-4027-a05b-112bb00523e9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 00:07:27.595341   17911 system_pods.go:89] "registry-proxy-ng2lx" [f39654e9-51f3-4325-9568-3999f3904260] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 00:07:27.595349   17911 system_pods.go:89] "snapshot-controller-7d9fbc56b8-85jjc" [3452da8d-e4e0-4ca4-b768-3379b6b892c6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 00:07:27.595361   17911 system_pods.go:89] "snapshot-controller-7d9fbc56b8-s748j" [e6a06856-562a-45d6-af80-78e109d24a5e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 00:07:27.595367   17911 system_pods.go:89] "storage-provisioner" [68b668f5-e60f-44f4-8df7-5378eb708ccc] Running
	I1217 00:07:27.595376   17911 system_pods.go:126] duration metric: took 32.182806ms to wait for k8s-apps to be running ...
	I1217 00:07:27.595389   17911 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 00:07:27.595438   17911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:07:27.607341   17911 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1217 00:07:27.607372   17911 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1217 00:07:27.676753   17911 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1217 00:07:27.676780   17911 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1217 00:07:27.768809   17911 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1217 00:07:27.972240   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:28.054596   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:28.056732   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:28.468935   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:28.533122   17911 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.716865508s)
	I1217 00:07:28.533180   17911 system_svc.go:56] duration metric: took 937.784294ms WaitForService to wait for kubelet
	I1217 00:07:28.533204   17911 kubeadm.go:587] duration metric: took 13.118498038s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:07:28.533232   17911 node_conditions.go:102] verifying NodePressure condition ...
	I1217 00:07:28.539774   17911 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1217 00:07:28.539825   17911 node_conditions.go:123] node cpu capacity is 2
	I1217 00:07:28.539847   17911 node_conditions.go:105] duration metric: took 6.608212ms to run NodePressure ...
	I1217 00:07:28.539863   17911 start.go:242] waiting for startup goroutines ...
	I1217 00:07:28.553473   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:28.554524   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:28.999577   17911 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.230727711s)
	I1217 00:07:29.000949   17911 addons.go:495] Verifying addon gcp-auth=true in "addons-262069"
	I1217 00:07:29.002934   17911 out.go:179] * Verifying gcp-auth addon...
	I1217 00:07:29.005094   17911 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1217 00:07:29.042158   17911 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1217 00:07:29.042188   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:29.042355   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:29.067224   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:29.080857   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:29.473399   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:29.512362   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:29.552671   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:29.557419   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:29.970767   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:30.014300   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:30.054996   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:30.055047   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:30.471181   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:30.510940   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:30.564680   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:30.564860   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:30.970464   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:31.015240   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:31.076276   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:31.079281   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:31.471412   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:31.570893   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:31.571280   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:31.572637   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:31.969395   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:32.011305   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:32.052696   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:32.056038   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:32.471445   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:32.512292   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:32.553879   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:32.553962   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:32.969358   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:33.013273   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:33.057251   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:33.057859   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:33.472261   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:33.569255   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:33.574109   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:33.574222   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:33.971003   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:34.010644   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:34.072501   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:34.072537   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:34.468875   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:34.509298   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:34.570588   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:34.571139   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:34.972720   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:35.009941   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:35.053140   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:35.054119   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:35.471063   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:35.509410   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:35.555736   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:35.558843   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:35.972347   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:36.011546   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:36.058257   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:36.062713   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:36.472999   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:36.514512   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:36.553748   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:36.554050   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:36.972555   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:37.015664   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:37.052848   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:37.053512   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:37.470244   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:37.574528   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:37.575162   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:37.575699   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:37.969611   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:38.011709   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:38.054922   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:38.057083   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:38.468425   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:38.508424   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:38.559737   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:38.561254   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:38.969313   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:39.012056   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:39.054166   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:39.056761   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:39.470884   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:39.512251   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:39.556044   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:39.557062   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:39.969538   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:40.010513   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:40.056688   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:40.056959   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:40.469650   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:40.511169   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:40.557368   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:40.558850   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:41.259916   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:41.260168   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:41.260171   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:41.264239   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:41.471462   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:41.511527   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:41.555469   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:41.555980   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:41.970975   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:42.010083   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:42.057642   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:42.062488   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:42.470877   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:42.509878   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:42.553476   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:42.553565   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:42.969260   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:43.012571   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:43.056865   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:43.060055   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:43.531191   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:43.531224   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:43.553797   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:43.557249   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:43.971917   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:44.019077   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:44.053902   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:44.057789   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:44.488323   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:44.514408   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:44.553272   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:44.554800   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:45.045001   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:45.045365   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:45.055144   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:45.057990   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:45.472257   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:45.513061   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:45.558944   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:45.560482   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:45.970331   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:46.015174   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:46.054644   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:46.058513   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:46.470734   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:46.510655   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:46.553569   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:46.554994   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:46.968868   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:47.009887   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:47.053569   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:47.053790   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:47.468891   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:47.510686   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:47.552893   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:47.552953   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:47.970246   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:48.014072   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:48.055170   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:48.057117   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:48.471174   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:48.509328   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:48.552937   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:48.556235   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:48.970179   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:49.011217   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:49.057591   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:49.058824   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:49.468196   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:49.508800   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:49.554820   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:49.558328   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:49.969540   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:50.009884   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:50.051990   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:50.053209   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:50.468847   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:50.509267   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:50.552710   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:50.554566   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:50.969369   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:51.014234   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:51.056878   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:51.057504   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:51.469130   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:51.510401   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:51.556212   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:51.558817   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:51.973892   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:52.009854   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:52.054791   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:52.055353   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:52.470441   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:52.510752   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:52.555734   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:52.555784   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:52.968764   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:53.009788   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:53.053007   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:53.053298   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:53.469533   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:53.508486   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:53.552210   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:53.554232   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:53.969955   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:54.009178   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:54.057674   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:54.058629   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 00:07:54.471903   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:54.509508   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:54.553879   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:54.557422   17911 kapi.go:107] duration metric: took 28.509174545s to wait for kubernetes.io/minikube-addons=registry ...
	I1217 00:07:54.971358   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:55.009735   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:55.055734   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:55.469255   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:55.508425   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:55.579166   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:55.978059   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:56.015536   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:56.062652   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:56.474340   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:56.509394   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:56.554907   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:56.969166   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:57.009250   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:57.053589   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:57.476444   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:57.510305   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:57.555861   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:57.969310   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:58.013711   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:58.055842   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:58.469585   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:58.514571   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:58.553119   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:58.971756   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:59.009835   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:59.056749   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:59.469757   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:07:59.511728   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:07:59.554137   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:07:59.971284   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:08:00.012673   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:00.054011   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:00.472353   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:08:00.511550   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:00.553731   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:00.972995   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:08:01.010764   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:01.053100   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:01.488357   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:08:01.509975   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:01.554277   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:01.971269   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:08:02.011234   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:02.056597   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:02.475814   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:08:02.512259   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:02.614072   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:02.970593   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:08:03.010930   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:03.053932   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:03.470658   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:08:03.511436   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:03.556126   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:03.970729   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:08:04.018261   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:04.056001   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:04.469804   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:08:04.509283   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:04.554560   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:04.969258   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:08:05.011237   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:05.052809   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:05.472488   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:08:05.508429   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:05.567004   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:05.970218   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:08:06.009070   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:06.054821   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:06.471710   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:08:06.573582   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:06.574152   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:06.978809   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:08:07.013232   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:07.052988   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:07.467068   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:08:07.512956   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:07.553371   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:07.972518   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:08:08.072528   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:08.073356   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:08.476574   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:08:08.519301   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:08.558623   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:08.969946   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:08:09.012787   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:09.055728   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:09.469564   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:08:09.511178   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:09.555968   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:09.979287   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:08:10.076983   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:10.077150   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:10.469048   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:08:10.512947   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:10.556336   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:10.970999   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:08:11.014462   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:11.053571   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:11.476203   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:08:11.574896   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:11.575003   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:11.972255   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:08:12.012160   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:12.075209   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:12.468618   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:08:12.510036   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:12.554967   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:12.970328   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:08:13.014528   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:13.055264   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:13.475150   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:08:13.513871   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:13.581751   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:13.971462   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:08:14.009609   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:14.053817   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:14.570064   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:14.570191   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:08:14.571144   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:14.978604   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:08:15.076423   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:15.076429   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:15.480512   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:08:15.511903   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:15.578239   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:15.970448   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:08:16.013445   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:16.057961   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:16.469867   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:08:16.512087   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:16.554339   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:16.971922   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:08:17.011014   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:17.052966   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:17.469243   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:08:17.509399   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:17.555192   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:17.972321   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:08:18.020234   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:18.054373   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:18.472304   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:08:18.510741   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:18.556346   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:18.973157   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:08:19.008601   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:19.070752   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:19.471441   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:08:19.511336   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:19.554512   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:19.969096   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:08:20.008968   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:20.056104   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:20.474947   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:08:20.510417   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:20.555208   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:20.970771   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:08:21.068479   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:21.069291   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:21.470745   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:08:21.509824   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:21.555249   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:21.968516   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:08:22.008772   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:22.057314   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:22.471335   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 00:08:22.571671   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:22.571906   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:22.968208   17911 kapi.go:107] duration metric: took 55.504099804s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1217 00:08:23.010557   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:23.052999   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:23.508774   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:23.553503   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:24.009227   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:24.053502   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:24.510301   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:24.553164   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:25.008647   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:25.052906   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:25.509618   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:25.553668   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:26.009952   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:26.053329   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:26.509501   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:26.553798   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:27.010128   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:27.054112   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:27.509695   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:27.555314   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:28.009094   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:28.052736   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:28.509424   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:28.553089   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:29.009129   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:29.052669   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:29.510043   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:29.553256   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:30.013488   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:30.056057   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:30.508821   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:30.556323   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:31.012048   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:31.056588   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:31.512262   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:31.560472   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:32.015258   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:32.052990   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:32.514186   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:32.554089   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:33.011346   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:33.055098   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:33.508877   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:33.555467   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:34.014899   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:34.057481   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:34.513828   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:34.555227   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:35.011931   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:35.053287   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:35.510461   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:35.553166   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:36.010311   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:36.054203   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:36.511255   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:36.554447   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:37.010713   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:37.054440   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:37.510371   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:37.555966   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:38.009168   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:38.053430   17911 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 00:08:38.516403   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:38.565621   17911 kapi.go:107] duration metric: took 1m12.516758297s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1217 00:08:39.010096   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:39.515168   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:40.015245   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:40.510670   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:41.011147   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:41.510552   17911 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 00:08:42.009825   17911 kapi.go:107] duration metric: took 1m13.004727494s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1217 00:08:42.011806   17911 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-262069 cluster.
	I1217 00:08:42.013451   17911 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1217 00:08:42.014720   17911 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1217 00:08:42.016390   17911 out.go:179] * Enabled addons: inspektor-gadget, storage-provisioner, ingress-dns, amd-gpu-device-plugin, nvidia-device-plugin, registry-creds, cloud-spanner, default-storageclass, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1217 00:08:42.017557   17911 addons.go:530] duration metric: took 1m26.602845252s for enable addons: enabled=[inspektor-gadget storage-provisioner ingress-dns amd-gpu-device-plugin nvidia-device-plugin registry-creds cloud-spanner default-storageclass metrics-server yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1217 00:08:42.017615   17911 start.go:247] waiting for cluster config update ...
	I1217 00:08:42.017645   17911 start.go:256] writing updated cluster config ...
	I1217 00:08:42.017992   17911 ssh_runner.go:195] Run: rm -f paused
	I1217 00:08:42.029830   17911 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 00:08:42.110359   17911 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-225dx" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:08:42.115474   17911 pod_ready.go:94] pod "coredns-66bc5c9577-225dx" is "Ready"
	I1217 00:08:42.115513   17911 pod_ready.go:86] duration metric: took 5.121006ms for pod "coredns-66bc5c9577-225dx" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:08:42.117735   17911 pod_ready.go:83] waiting for pod "etcd-addons-262069" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:08:42.123956   17911 pod_ready.go:94] pod "etcd-addons-262069" is "Ready"
	I1217 00:08:42.123984   17911 pod_ready.go:86] duration metric: took 6.214519ms for pod "etcd-addons-262069" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:08:42.126497   17911 pod_ready.go:83] waiting for pod "kube-apiserver-addons-262069" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:08:42.132058   17911 pod_ready.go:94] pod "kube-apiserver-addons-262069" is "Ready"
	I1217 00:08:42.132088   17911 pod_ready.go:86] duration metric: took 5.566687ms for pod "kube-apiserver-addons-262069" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:08:42.134190   17911 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-262069" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:08:42.434687   17911 pod_ready.go:94] pod "kube-controller-manager-addons-262069" is "Ready"
	I1217 00:08:42.434722   17911 pod_ready.go:86] duration metric: took 300.501021ms for pod "kube-controller-manager-addons-262069" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:08:42.636062   17911 pod_ready.go:83] waiting for pod "kube-proxy-pdf4s" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:08:43.034447   17911 pod_ready.go:94] pod "kube-proxy-pdf4s" is "Ready"
	I1217 00:08:43.034482   17911 pod_ready.go:86] duration metric: took 398.388512ms for pod "kube-proxy-pdf4s" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:08:43.233990   17911 pod_ready.go:83] waiting for pod "kube-scheduler-addons-262069" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:08:43.634302   17911 pod_ready.go:94] pod "kube-scheduler-addons-262069" is "Ready"
	I1217 00:08:43.634338   17911 pod_ready.go:86] duration metric: took 400.293515ms for pod "kube-scheduler-addons-262069" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 00:08:43.634357   17911 pod_ready.go:40] duration metric: took 1.604489345s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 00:08:43.711172   17911 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1217 00:08:43.712755   17911 out.go:179] * Done! kubectl is now configured to use "addons-262069" cluster and "default" namespace by default
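	(Editor's note: the repeated kapi.go:96 "waiting for pod" lines and the pod_ready.go checks above are produced by a label-selector readiness poll against the cluster. Below is a minimal, illustrative client-go sketch of that pattern, assuming a standard ~/.kube/config, and using an example namespace, selector, poll interval, and timeout; it is not minikube's actual kapi.go implementation.)
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// podsReady reports whether at least one pod matches the label selector and
	// every matching pod is Running with the Ready condition set to True.
	func podsReady(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, err
		}
		if len(pods.Items) == 0 {
			return false, nil // nothing scheduled yet; keep polling (logged as "Pending" above)
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				return false, nil
			}
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			if !ready {
				return false, nil
			}
		}
		return true, nil
	}
	
	func main() {
		// Illustrative values: namespace, selector, 500ms interval, 6m timeout.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
	
		start := time.Now()
		for {
			ok, err := podsReady(ctx, cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx")
			if err == nil && ok {
				// Mirrors the kapi.go:107 "duration metric: took ..." summary line.
				fmt.Printf("took %s to wait for app.kubernetes.io/name=ingress-nginx\n", time.Since(start))
				return
			}
			select {
			case <-ctx.Done():
				panic("timed out waiting for pods to become Ready")
			case <-time.After(500 * time.Millisecond):
			}
		}
	}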
	
	
	==> CRI-O <==
	Dec 17 00:11:39 addons-262069 crio[821]: time="2025-12-17 00:11:39.212806499Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0106d92d-ac83-4ff6-aa21-39a08f015b5f name=/runtime.v1.RuntimeService/Version
	Dec 17 00:11:39 addons-262069 crio[821]: time="2025-12-17 00:11:39.214247864Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=81498190-5c1e-4e7c-b3c9-e45ecd9c4d5b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 00:11:39 addons-262069 crio[821]: time="2025-12-17 00:11:39.215750041Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765930299215720682,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:554377,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=81498190-5c1e-4e7c-b3c9-e45ecd9c4d5b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 00:11:39 addons-262069 crio[821]: time="2025-12-17 00:11:39.216586787Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0f528892-828a-4e14-95ff-edc9a72ebd3a name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 00:11:39 addons-262069 crio[821]: time="2025-12-17 00:11:39.216661187Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0f528892-828a-4e14-95ff-edc9a72ebd3a name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 00:11:39 addons-262069 crio[821]: time="2025-12-17 00:11:39.217584622Z" level=debug msg="Setting stage for resource k8s_hello-world-app_hello-world-app-5d498dc89-98t54_default_5d5f0ee3-96d7-4fd9-a8f1-c32bda978dc4_0 from container spec configuration to container runtime creation" file="resourcestore/resourcestore.go:227" id=a60b09d6-158c-436a-8b1c-c5be80434b30 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:11:39 addons-262069 crio[821]: time="2025-12-17 00:11:39.217670705Z" level=debug msg="running conmon: /usr/bin/conmon" args="[-b /var/run/containers/storage/overlay-containers/96af083870aa83e5678ae1706f84ca43c468921acb244c7effa6c016aa127bcf/userdata -c 96af083870aa83e5678ae1706f84ca43c468921acb244c7effa6c016aa127bcf --exit-dir /var/run/crio/exits -l /var/log/pods/default_hello-world-app-5d498dc89-98t54_5d5f0ee3-96d7-4fd9-a8f1-c32bda978dc4/hello-world-app/0.log --log-level debug -n k8s_hello-world-app_hello-world-app-5d498dc89-98t54_default_5d5f0ee3-96d7-4fd9-a8f1-c32bda978dc4_0 -P /var/run/containers/storage/overlay-containers/96af083870aa83e5678ae1706f84ca43c468921acb244c7effa6c016aa127bcf/userdata/conmon-pidfile -p /var/run/containers/storage/overlay-containers/96af083870aa83e5678ae1706f84ca43c468921acb244c7effa6c016aa127bcf/userdata/pidfile --persist-dir /var/lib/containers/storage/overlay-containers/96af083870aa83e5678ae1706f84ca43c468921acb244c7effa6c016aa127bcf/userdata -r /usr/bin/ru
nc --runtime-arg --root=/run/runc --socket-dir-path /var/run/crio --syslog -u 96af083870aa83e5678ae1706f84ca43c468921acb244c7effa6c016aa127bcf]" file="oci/runtime_oci.go:168" id=a60b09d6-158c-436a-8b1c-c5be80434b30 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:11:39 addons-262069 crio[821]: time="2025-12-17 00:11:39.219221518Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:da37136af4866b03c248d101bf3269e6e1507fe8823a2906d0743fa7e91a0fd0,PodSandboxId:95c9514edc6fdc5390e19cfcb6a451f0582ff2c73d50270cb9324b98a2a87e42,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765930155824691343,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2d05a1d3-b173-402d-b417-d11ed3f1e38b,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c47e8cf37ec48ffecac4366103fda90e67bbcfe4a41f098615a5749642e1e6c2,PodSandboxId:a0c6cedad82797adde6f3c570e1a006e2c0fdb2d4e546aa650c6e5516137527b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765930127335434595,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8a6b0152-c8cd-4b61-8658-a844c2dedd65,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d791d8371392dd47d7174e66361893c24207e1bf308c7ab82681f9de907ab776,PodSandboxId:3888987b0ab2bab41431a7c0bac1f7b6806bbfe59a0ac2a7f3f36a3856e4f748,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765930118225822713,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-qhfmc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8105d8cc-5b94-4c6a-bee1-54b1e14b6391,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5a73bf3ec068e571089e68f105e4ab7e44acde052e9eb95de7b608a4fc09be6c,PodSandboxId:9480d585d56fbb92e05ff3308b81c006069c346d8aa9c21b5bd4fc7e4991197e,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e
,State:CONTAINER_EXITED,CreatedAt:1765930090669111589,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-d56md,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f9ae3f8a-34e3-471c-8324-23bee411de9d,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99842303bb27853949e0f1665f8390690a352102ef3556aa78ab8080a15ac570,PodSandboxId:6178d285bfbd316561b170df74570c6719d0c89544a0c87043e7ec65f534e66a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7
e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765930089764496270,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-nx5df,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 36ee754c-16ce-4b51-a73b-e9b7f470849a,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d12b37b795e3391d909d4371fdf670fdeaf7ee2c6921a88491d91f4007f0bc0e,PodSandboxId:b3a444a50c80f6945a02f6ad9ce3b921129fddee6795b33c61bc26fba15308f2,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandl
er:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1765930086244761959,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-qdlpj,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: cc8b49e9-68ff-4324-874a-662d24fed8c2,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf166d852d17d2293c0962f69700a90a7d0de70a404f0a1d773b83e67bb68849,PodSandboxId:c4648e07535e2a80e2afe73a882d6f0bd6b561fd5979695b9a30bf3a345caa74,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7
,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765930067342465709,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a72b7afb-8519-407e-93cc-fb6d4827edf6,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e76d2536ddb378019184abc273182b5c9efc0671d0e5a07283e39a77e7463bac,PodSandboxId:b6187cb2f2b25b1c6aa7a065827616b5afbecdadadd21d66e100baef0b18bc54,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0
,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765930054053557622,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-h7ktx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 868af750-76b7-4d6a-8b9c-c20ef980f23c,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd1fdba2b689a084415531f7474442db44470fbe88cccf6cc431a5d63e3e0f4e,PodSandboxId:b1304a0bf9b4b4914f299b6fc14724b72425d8a0fe187b3ef18eade6322683dc,Metadata:&ContainerMetad
ata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765930044628296994,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68b668f5-e60f-44f4-8df7-5378eb708ccc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf1e5703f653f9d5f4dbdfde28ffe80fb515d7b12142a5417d50714466645732,PodSandboxId:fa54351057863bdcc9ea220db693cbcc7c16ab52d48588e0af8f15e9c57844a3,Metadata:&ContainerMetadata{Name:cor
edns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765930037664588312,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-225dx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0273678-dce6-4db9-bdb2-ba3a3c08cdef,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a6f4f5a400e23f398e0aab5335420b4e49cdde8aa1f8aa33525397d22505556,PodSandboxId:86c542fe315234e3a8bc67df05ce934e338a3c1040a4e5ccc2fbee483b264027,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765930036784568111,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pdf4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6e7cf26-13ad-48d5-8dc7-8bdc4518f890,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2fbdca13e384bcbfd3fbdaf9d95ad5967c5096ccf2372b109699fddf5e0bba5,PodSandboxId:1b5db531bb4eb31668424c055dede534a8da8a5336328e8f28129ca22af6eb4b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765930023925855744,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-262069,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70f87735738bf609c468945d5b40c70e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.containe
r.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:375c642c900b45d46f1a83108aa9915adf2f8a5967893585a022990a60789ab1,PodSandboxId:d1115a328d57639bfb7928690a82aad17c808148b17f126b75a24f7667c5a552,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765930023873653127,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-262069,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f259204777a715bea40fd47e464c877,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernete
s.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67450bc656f73dd9235124553c7b9a80e9f2e5403b09204044ae68765e6cdd43,PodSandboxId:3ea99958ef1d6f741f302c568fe7f2b53e69e4333c5c83d3589e686c80feb199,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765930023894190560,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-262069,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea9f337cf9
613b55c21a1b74e0c76d0b,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2108cbe18ef2e4cc687c754ecd34e7173cf6b37c68d3d441e41aa01b0f6b4ba3,PodSandboxId:cc00906076d954b41db3a94a7da98450d8204b4e09c237d59f3bb2e96bca3338,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765930023869582556,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-addons-262069,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0c8a349d17e38fe2a6b518411e1f43b,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0f528892-828a-4e14-95ff-edc9a72ebd3a name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 00:11:39 addons-262069 conmon[12395]: conmon 96af083870aa83e5678a <ndebug>: addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/12/attach}
	Dec 17 00:11:39 addons-262069 conmon[12395]: conmon 96af083870aa83e5678a <ndebug>: terminal_ctrl_fd: 12
	Dec 17 00:11:39 addons-262069 conmon[12395]: conmon 96af083870aa83e5678a <ndebug>: winsz read side: 16, winsz write side: 17
	Dec 17 00:11:39 addons-262069 crio[821]: time="2025-12-17 00:11:39.269826988Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bb7c770a-8126-4f0e-a753-d11701f777da name=/runtime.v1.RuntimeService/Version
	Dec 17 00:11:39 addons-262069 crio[821]: time="2025-12-17 00:11:39.269924101Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bb7c770a-8126-4f0e-a753-d11701f777da name=/runtime.v1.RuntimeService/Version
	Dec 17 00:11:39 addons-262069 crio[821]: time="2025-12-17 00:11:39.273943652Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0b454a2d-9afe-45c5-9a6e-f604a977a55e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 00:11:39 addons-262069 crio[821]: time="2025-12-17 00:11:39.277511613Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765930299277285560,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:554377,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0b454a2d-9afe-45c5-9a6e-f604a977a55e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 00:11:39 addons-262069 crio[821]: time="2025-12-17 00:11:39.278677154Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9ec42285-9a68-4a03-b6ab-40eed604748b name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 00:11:39 addons-262069 crio[821]: time="2025-12-17 00:11:39.278739289Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9ec42285-9a68-4a03-b6ab-40eed604748b name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 00:11:39 addons-262069 crio[821]: time="2025-12-17 00:11:39.280073261Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:da37136af4866b03c248d101bf3269e6e1507fe8823a2906d0743fa7e91a0fd0,PodSandboxId:95c9514edc6fdc5390e19cfcb6a451f0582ff2c73d50270cb9324b98a2a87e42,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765930155824691343,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2d05a1d3-b173-402d-b417-d11ed3f1e38b,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c47e8cf37ec48ffecac4366103fda90e67bbcfe4a41f098615a5749642e1e6c2,PodSandboxId:a0c6cedad82797adde6f3c570e1a006e2c0fdb2d4e546aa650c6e5516137527b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765930127335434595,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8a6b0152-c8cd-4b61-8658-a844c2dedd65,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d791d8371392dd47d7174e66361893c24207e1bf308c7ab82681f9de907ab776,PodSandboxId:3888987b0ab2bab41431a7c0bac1f7b6806bbfe59a0ac2a7f3f36a3856e4f748,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765930118225822713,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-qhfmc,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8105d8cc-5b94-4c6a-bee1-54b1e14b6391,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5a73bf3ec068e571089e68f105e4ab7e44acde052e9eb95de7b608a4fc09be6c,PodSandboxId:9480d585d56fbb92e05ff3308b81c006069c346d8aa9c21b5bd4fc7e4991197e,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e
,State:CONTAINER_EXITED,CreatedAt:1765930090669111589,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-d56md,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f9ae3f8a-34e3-471c-8324-23bee411de9d,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99842303bb27853949e0f1665f8390690a352102ef3556aa78ab8080a15ac570,PodSandboxId:6178d285bfbd316561b170df74570c6719d0c89544a0c87043e7ec65f534e66a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7
e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765930089764496270,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-nx5df,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 36ee754c-16ce-4b51-a73b-e9b7f470849a,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d12b37b795e3391d909d4371fdf670fdeaf7ee2c6921a88491d91f4007f0bc0e,PodSandboxId:b3a444a50c80f6945a02f6ad9ce3b921129fddee6795b33c61bc26fba15308f2,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandl
er:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1765930086244761959,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-qdlpj,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: cc8b49e9-68ff-4324-874a-662d24fed8c2,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf166d852d17d2293c0962f69700a90a7d0de70a404f0a1d773b83e67bb68849,PodSandboxId:c4648e07535e2a80e2afe73a882d6f0bd6b561fd5979695b9a30bf3a345caa74,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7
,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765930067342465709,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a72b7afb-8519-407e-93cc-fb6d4827edf6,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e76d2536ddb378019184abc273182b5c9efc0671d0e5a07283e39a77e7463bac,PodSandboxId:b6187cb2f2b25b1c6aa7a065827616b5afbecdadadd21d66e100baef0b18bc54,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0
,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765930054053557622,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-h7ktx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 868af750-76b7-4d6a-8b9c-c20ef980f23c,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd1fdba2b689a084415531f7474442db44470fbe88cccf6cc431a5d63e3e0f4e,PodSandboxId:b1304a0bf9b4b4914f299b6fc14724b72425d8a0fe187b3ef18eade6322683dc,Metadata:&ContainerMetad
ata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765930044628296994,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68b668f5-e60f-44f4-8df7-5378eb708ccc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf1e5703f653f9d5f4dbdfde28ffe80fb515d7b12142a5417d50714466645732,PodSandboxId:fa54351057863bdcc9ea220db693cbcc7c16ab52d48588e0af8f15e9c57844a3,Metadata:&ContainerMetadata{Name:cor
edns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765930037664588312,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-225dx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0273678-dce6-4db9-bdb2-ba3a3c08cdef,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a6f4f5a400e23f398e0aab5335420b4e49cdde8aa1f8aa33525397d22505556,PodSandboxId:86c542fe315234e3a8bc67df05ce934e338a3c1040a4e5ccc2fbee483b264027,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765930036784568111,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pdf4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6e7cf26-13ad-48d5-8dc7-8bdc4518f890,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/ter
mination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2fbdca13e384bcbfd3fbdaf9d95ad5967c5096ccf2372b109699fddf5e0bba5,PodSandboxId:1b5db531bb4eb31668424c055dede534a8da8a5336328e8f28129ca22af6eb4b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765930023925855744,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-262069,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 70f87735738bf609c468945d5b40c70e,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.containe
r.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:375c642c900b45d46f1a83108aa9915adf2f8a5967893585a022990a60789ab1,PodSandboxId:d1115a328d57639bfb7928690a82aad17c808148b17f126b75a24f7667c5a552,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765930023873653127,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-262069,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f259204777a715bea40fd47e464c877,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernete
s.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67450bc656f73dd9235124553c7b9a80e9f2e5403b09204044ae68765e6cdd43,PodSandboxId:3ea99958ef1d6f741f302c568fe7f2b53e69e4333c5c83d3589e686c80feb199,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765930023894190560,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-262069,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea9f337cf9
613b55c21a1b74e0c76d0b,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2108cbe18ef2e4cc687c754ecd34e7173cf6b37c68d3d441e41aa01b0f6b4ba3,PodSandboxId:cc00906076d954b41db3a94a7da98450d8204b4e09c237d59f3bb2e96bca3338,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765930023869582556,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-addons-262069,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0c8a349d17e38fe2a6b518411e1f43b,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9ec42285-9a68-4a03-b6ab-40eed604748b name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 00:11:39 addons-262069 conmon[12395]: conmon 96af083870aa83e5678a <ndebug>: container PID: 12412
	Dec 17 00:11:39 addons-262069 crio[821]: time="2025-12-17 00:11:39.282151423Z" level=debug msg="Received container pid: 12412" file="oci/runtime_oci.go:284" id=a60b09d6-158c-436a-8b1c-c5be80434b30 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:11:39 addons-262069 crio[821]: time="2025-12-17 00:11:39.297354046Z" level=info msg="Created container 96af083870aa83e5678ae1706f84ca43c468921acb244c7effa6c016aa127bcf: default/hello-world-app-5d498dc89-98t54/hello-world-app" file="server/container_create.go:491" id=a60b09d6-158c-436a-8b1c-c5be80434b30 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:11:39 addons-262069 crio[821]: time="2025-12-17 00:11:39.297496983Z" level=debug msg="Response: &CreateContainerResponse{ContainerId:96af083870aa83e5678ae1706f84ca43c468921acb244c7effa6c016aa127bcf,}" file="otel-collector/interceptors.go:74" id=a60b09d6-158c-436a-8b1c-c5be80434b30 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 00:11:39 addons-262069 crio[821]: time="2025-12-17 00:11:39.298254689Z" level=debug msg="Request: &StartContainerRequest{ContainerId:96af083870aa83e5678ae1706f84ca43c468921acb244c7effa6c016aa127bcf,}" file="otel-collector/interceptors.go:62" id=5dbabbd6-71d3-40c2-a558-c3753f40f9c6 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 00:11:39 addons-262069 crio[821]: time="2025-12-17 00:11:39.298635939Z" level=info msg="Starting container: 96af083870aa83e5678ae1706f84ca43c468921acb244c7effa6c016aa127bcf" file="server/container_start.go:21" id=5dbabbd6-71d3-40c2-a558-c3753f40f9c6 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 00:11:39 addons-262069 crio[821]: time="2025-12-17 00:11:39.317718831Z" level=info msg="Started container" PID=12412 containerID=96af083870aa83e5678ae1706f84ca43c468921acb244c7effa6c016aa127bcf description=default/hello-world-app-5d498dc89-98t54/hello-world-app file="server/container_start.go:115" id=5dbabbd6-71d3-40c2-a558-c3753f40f9c6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d299adcafa430073f3f1a037770c2f02c3f7d0156034321e47fe98b887e2c890
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	96af083870aa8       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app           0                   d299adcafa430       hello-world-app-5d498dc89-98t54             default
	da37136af4866       public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff                           2 minutes ago            Running             nginx                     0                   95c9514edc6fd       nginx                                       default
	c47e8cf37ec48       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago            Running             busybox                   0                   a0c6cedad8279       busybox                                     default
	d791d8371392d       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad             3 minutes ago            Running             controller                0                   3888987b0ab2b       ingress-nginx-controller-85d4c799dd-qhfmc   ingress-nginx
	5a73bf3ec068e       a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e                                                             3 minutes ago            Exited              patch                     1                   9480d585d56fb       ingress-nginx-admission-patch-d56md         ingress-nginx
	99842303bb278       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285   3 minutes ago            Exited              create                    0                   6178d285bfbd3       ingress-nginx-admission-create-nx5df        ingress-nginx
	d12b37b795e33       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             3 minutes ago            Running             local-path-provisioner    0                   b3a444a50c80f       local-path-provisioner-648f6765c9-qdlpj     local-path-storage
	cf166d852d17d       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               3 minutes ago            Running             minikube-ingress-dns      0                   c4648e07535e2       kube-ingress-dns-minikube                   kube-system
	e76d2536ddb37       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago            Running             amd-gpu-device-plugin     0                   b6187cb2f2b25       amd-gpu-device-plugin-h7ktx                 kube-system
	dd1fdba2b689a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago            Running             storage-provisioner       0                   b1304a0bf9b4b       storage-provisioner                         kube-system
	bf1e5703f653f       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             4 minutes ago            Running             coredns                   0                   fa54351057863       coredns-66bc5c9577-225dx                    kube-system
	1a6f4f5a400e2       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                             4 minutes ago            Running             kube-proxy                0                   86c542fe31523       kube-proxy-pdf4s                            kube-system
	f2fbdca13e384       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                             4 minutes ago            Running             etcd                      0                   1b5db531bb4eb       etcd-addons-262069                          kube-system
	67450bc656f73       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                             4 minutes ago            Running             kube-scheduler            0                   3ea99958ef1d6       kube-scheduler-addons-262069                kube-system
	375c642c900b4       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                             4 minutes ago            Running             kube-controller-manager   0                   d1115a328d576       kube-controller-manager-addons-262069       kube-system
	2108cbe18ef2e       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                             4 minutes ago            Running             kube-apiserver            0                   cc00906076d95       kube-apiserver-addons-262069                kube-system
	
	
	==> coredns [bf1e5703f653f9d5f4dbdfde28ffe80fb515d7b12142a5417d50714466645732] <==
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	[INFO] Reloading complete
	[INFO] 127.0.0.1:32993 - 2111 "HINFO IN 2496638363767256317.4362979049479113296. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.022927187s
	[INFO] 10.244.0.23:49586 - 37635 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000517219s
	[INFO] 10.244.0.23:52028 - 33310 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.003110323s
	[INFO] 10.244.0.23:53126 - 40773 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000156478s
	[INFO] 10.244.0.23:45334 - 26517 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000179076s
	[INFO] 10.244.0.23:36825 - 22047 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000204358s
	[INFO] 10.244.0.23:33288 - 65522 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000170427s
	[INFO] 10.244.0.23:42587 - 17685 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.005228337s
	[INFO] 10.244.0.23:56158 - 10600 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.005586035s
	[INFO] 10.244.0.28:37036 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00038996s
	[INFO] 10.244.0.28:60967 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000184943s
	
	
	==> describe nodes <==
	Name:               addons-262069
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-262069
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1
	                    minikube.k8s.io/name=addons-262069
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T00_07_10_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-262069
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 00:07:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-262069
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 00:11:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 00:09:43 +0000   Wed, 17 Dec 2025 00:07:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 00:09:43 +0000   Wed, 17 Dec 2025 00:07:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 00:09:43 +0000   Wed, 17 Dec 2025 00:07:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 00:09:43 +0000   Wed, 17 Dec 2025 00:07:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.183
	  Hostname:    addons-262069
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 c11e3475a3334013be6a553f88d11a60
	  System UUID:                c11e3475-a333-4013-be6a-553f88d11a60
	  Boot ID:                    d44a487a-f7ff-4581-bcd5-fa72f4800bda
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m55s
	  default                     hello-world-app-5d498dc89-98t54              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-qhfmc    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m14s
	  kube-system                 amd-gpu-device-plugin-h7ktx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 coredns-66bc5c9577-225dx                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m23s
	  kube-system                 etcd-addons-262069                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m31s
	  kube-system                 kube-apiserver-addons-262069                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 kube-controller-manager-addons-262069        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-proxy-pdf4s                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 kube-scheduler-addons-262069                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m18s
	  local-path-storage          local-path-provisioner-648f6765c9-qdlpj      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m21s  kube-proxy       
	  Normal  Starting                 4m30s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m29s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m29s  kubelet          Node addons-262069 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m29s  kubelet          Node addons-262069 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m29s  kubelet          Node addons-262069 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m28s  kubelet          Node addons-262069 status is now: NodeReady
	  Normal  RegisteredNode           4m25s  node-controller  Node addons-262069 event: Registered Node addons-262069 in Controller
	
	
	==> dmesg <==
	[  +0.490734] kauditd_printk_skb: 251 callbacks suppressed
	[  +0.370480] kauditd_printk_skb: 368 callbacks suppressed
	[  +8.016002] kauditd_printk_skb: 110 callbacks suppressed
	[  +8.254233] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.861974] kauditd_printk_skb: 26 callbacks suppressed
	[  +6.557112] kauditd_printk_skb: 32 callbacks suppressed
	[Dec17 00:08] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.752699] kauditd_printk_skb: 131 callbacks suppressed
	[  +3.727066] kauditd_printk_skb: 142 callbacks suppressed
	[  +5.598833] kauditd_printk_skb: 90 callbacks suppressed
	[  +0.000068] kauditd_printk_skb: 5 callbacks suppressed
	[  +0.000124] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.149438] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.453226] kauditd_printk_skb: 47 callbacks suppressed
	[ +10.743783] kauditd_printk_skb: 17 callbacks suppressed
	[Dec17 00:09] kauditd_printk_skb: 22 callbacks suppressed
	[  +4.668012] kauditd_printk_skb: 38 callbacks suppressed
	[  +0.000111] kauditd_printk_skb: 109 callbacks suppressed
	[  +1.209565] kauditd_printk_skb: 129 callbacks suppressed
	[  +0.308204] kauditd_printk_skb: 128 callbacks suppressed
	[  +0.306667] kauditd_printk_skb: 124 callbacks suppressed
	[  +4.538301] kauditd_printk_skb: 25 callbacks suppressed
	[  +5.211458] kauditd_printk_skb: 93 callbacks suppressed
	[  +0.684805] kauditd_printk_skb: 78 callbacks suppressed
	[Dec17 00:11] kauditd_printk_skb: 71 callbacks suppressed
	
	
	==> etcd [f2fbdca13e384bcbfd3fbdaf9d95ad5967c5096ccf2372b109699fddf5e0bba5] <==
	{"level":"warn","ts":"2025-12-17T00:07:41.248301Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"204.267785ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-17T00:07:41.248325Z","caller":"traceutil/trace.go:172","msg":"trace[1789477153] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:919; }","duration":"204.294573ms","start":"2025-12-17T00:07:41.044025Z","end":"2025-12-17T00:07:41.248320Z","steps":["trace[1789477153] 'agreement among raft nodes before linearized reading'  (duration: 204.251767ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T00:07:43.518170Z","caller":"traceutil/trace.go:172","msg":"trace[870624884] transaction","detail":"{read_only:false; response_revision:922; number_of_response:1; }","duration":"211.032364ms","start":"2025-12-17T00:07:43.307124Z","end":"2025-12-17T00:07:43.518156Z","steps":["trace[870624884] 'process raft request'  (duration: 210.683554ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T00:07:44.901478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:07:44.950594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:07:44.987938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45448","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-17T00:07:45.011463Z","caller":"traceutil/trace.go:172","msg":"trace[1211641140] transaction","detail":"{read_only:false; response_revision:923; number_of_response:1; }","duration":"199.184083ms","start":"2025-12-17T00:07:44.812267Z","end":"2025-12-17T00:07:45.011451Z","steps":["trace[1211641140] 'process raft request'  (duration: 199.06416ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T00:07:45.050200Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45470","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-17T00:07:55.347228Z","caller":"traceutil/trace.go:172","msg":"trace[1250803468] transaction","detail":"{read_only:false; response_revision:955; number_of_response:1; }","duration":"156.29914ms","start":"2025-12-17T00:07:55.190916Z","end":"2025-12-17T00:07:55.347215Z","steps":["trace[1250803468] 'process raft request'  (duration: 156.19955ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T00:07:59.732386Z","caller":"traceutil/trace.go:172","msg":"trace[633269512] transaction","detail":"{read_only:false; response_revision:974; number_of_response:1; }","duration":"171.300091ms","start":"2025-12-17T00:07:59.561074Z","end":"2025-12-17T00:07:59.732374Z","steps":["trace[633269512] 'process raft request'  (duration: 171.112972ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T00:07:59.742239Z","caller":"traceutil/trace.go:172","msg":"trace[1570570202] transaction","detail":"{read_only:false; response_revision:975; number_of_response:1; }","duration":"160.705473ms","start":"2025-12-17T00:07:59.581522Z","end":"2025-12-17T00:07:59.742227Z","steps":["trace[1570570202] 'process raft request'  (duration: 160.621537ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T00:08:03.927302Z","caller":"traceutil/trace.go:172","msg":"trace[828096755] transaction","detail":"{read_only:false; response_revision:992; number_of_response:1; }","duration":"118.709888ms","start":"2025-12-17T00:08:03.808393Z","end":"2025-12-17T00:08:03.927103Z","steps":["trace[828096755] 'process raft request'  (duration: 118.324583ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T00:08:26.347398Z","caller":"traceutil/trace.go:172","msg":"trace[625907743] transaction","detail":"{read_only:false; response_revision:1132; number_of_response:1; }","duration":"179.798685ms","start":"2025-12-17T00:08:26.167580Z","end":"2025-12-17T00:08:26.347379Z","steps":["trace[625907743] 'process raft request'  (duration: 179.704746ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T00:08:44.267267Z","caller":"traceutil/trace.go:172","msg":"trace[178099999] transaction","detail":"{read_only:false; response_revision:1188; number_of_response:1; }","duration":"147.01678ms","start":"2025-12-17T00:08:44.120238Z","end":"2025-12-17T00:08:44.267255Z","steps":["trace[178099999] 'process raft request'  (duration: 146.90828ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T00:09:08.972954Z","caller":"traceutil/trace.go:172","msg":"trace[1268266188] linearizableReadLoop","detail":"{readStateIndex:1377; appliedIndex:1377; }","duration":"236.606952ms","start":"2025-12-17T00:09:08.736318Z","end":"2025-12-17T00:09:08.972925Z","steps":["trace[1268266188] 'read index received'  (duration: 236.600783ms)","trace[1268266188] 'applied index is now lower than readState.Index'  (duration: 5.234µs)"],"step_count":2}
	{"level":"info","ts":"2025-12-17T00:09:08.973162Z","caller":"traceutil/trace.go:172","msg":"trace[533549602] transaction","detail":"{read_only:false; response_revision:1335; number_of_response:1; }","duration":"285.016673ms","start":"2025-12-17T00:09:08.688115Z","end":"2025-12-17T00:09:08.973132Z","steps":["trace[533549602] 'process raft request'  (duration: 284.896937ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T00:09:08.973309Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"236.938261ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/hpvc\" limit:1 ","response":"range_response_count:1 size:822"}
	{"level":"info","ts":"2025-12-17T00:09:08.973335Z","caller":"traceutil/trace.go:172","msg":"trace[1937506425] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/hpvc; range_end:; response_count:1; response_revision:1335; }","duration":"237.017925ms","start":"2025-12-17T00:09:08.736312Z","end":"2025-12-17T00:09:08.973330Z","steps":["trace[1937506425] 'agreement among raft nodes before linearized reading'  (duration: 236.852255ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T00:09:08.976342Z","caller":"traceutil/trace.go:172","msg":"trace[42840466] transaction","detail":"{read_only:false; response_revision:1336; number_of_response:1; }","duration":"211.286948ms","start":"2025-12-17T00:09:08.765043Z","end":"2025-12-17T00:09:08.976330Z","steps":["trace[42840466] 'process raft request'  (duration: 210.530419ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T00:09:15.629195Z","caller":"traceutil/trace.go:172","msg":"trace[1783732182] linearizableReadLoop","detail":"{readStateIndex:1460; appliedIndex:1460; }","duration":"124.261047ms","start":"2025-12-17T00:09:15.504918Z","end":"2025-12-17T00:09:15.629179Z","steps":["trace[1783732182] 'read index received'  (duration: 124.255347ms)","trace[1783732182] 'applied index is now lower than readState.Index'  (duration: 5.128µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T00:09:15.629368Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.42403ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-17T00:09:15.629394Z","caller":"traceutil/trace.go:172","msg":"trace[1675442169] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1414; }","duration":"124.47451ms","start":"2025-12-17T00:09:15.504914Z","end":"2025-12-17T00:09:15.629388Z","steps":["trace[1675442169] 'agreement among raft nodes before linearized reading'  (duration: 124.393254ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T00:09:15.629725Z","caller":"traceutil/trace.go:172","msg":"trace[1851335785] transaction","detail":"{read_only:false; response_revision:1415; number_of_response:1; }","duration":"211.627983ms","start":"2025-12-17T00:09:15.418086Z","end":"2025-12-17T00:09:15.629714Z","steps":["trace[1851335785] 'process raft request'  (duration: 211.51416ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T00:09:20.573530Z","caller":"traceutil/trace.go:172","msg":"trace[360089175] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1464; }","duration":"147.845395ms","start":"2025-12-17T00:09:20.425672Z","end":"2025-12-17T00:09:20.573517Z","steps":["trace[360089175] 'process raft request'  (duration: 147.402934ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T00:09:20.578788Z","caller":"traceutil/trace.go:172","msg":"trace[1569039063] transaction","detail":"{read_only:false; response_revision:1465; number_of_response:1; }","duration":"103.462518ms","start":"2025-12-17T00:09:20.475314Z","end":"2025-12-17T00:09:20.578777Z","steps":["trace[1569039063] 'process raft request'  (duration: 103.256919ms)"],"step_count":1}
	
	
	==> kernel <==
	 00:11:39 up 5 min,  0 users,  load average: 0.89, 1.63, 0.82
	Linux addons-262069 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Dec 16 03:41:16 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [2108cbe18ef2e4cc687c754ecd34e7173cf6b37c68d3d441e41aa01b0f6b4ba3] <==
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1217 00:08:02.511937       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1217 00:08:54.836619       1 conn.go:339] Error on socket receive: read tcp 192.168.39.183:8443->192.168.39.1:46146: use of closed network connection
	E1217 00:08:55.095847       1 conn.go:339] Error on socket receive: read tcp 192.168.39.183:8443->192.168.39.1:46178: use of closed network connection
	I1217 00:09:04.410386       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.99.8.95"}
	I1217 00:09:10.143227       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1217 00:09:10.389337       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.199.61"}
	I1217 00:09:21.949277       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1217 00:09:48.897162       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1217 00:09:48.897222       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1217 00:09:48.950537       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1217 00:09:48.950575       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1217 00:09:48.982783       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1217 00:09:48.982892       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1217 00:09:49.009460       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1217 00:09:49.009806       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1217 00:09:49.032658       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1217 00:09:49.034541       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1217 00:09:50.009521       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1217 00:09:50.032878       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W1217 00:09:50.052208       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	I1217 00:10:03.484790       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1217 00:11:37.923884       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.96.91.62"}
	
	
	==> kube-controller-manager [375c642c900b45d46f1a83108aa9915adf2f8a5967893585a022990a60789ab1] <==
	E1217 00:09:59.432096       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1217 00:10:06.285681       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1217 00:10:06.286915       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1217 00:10:09.660841       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1217 00:10:09.662100       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1217 00:10:10.225713       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1217 00:10:10.226978       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I1217 00:10:16.106247       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1217 00:10:16.106367       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 00:10:16.119946       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1217 00:10:16.120037       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	E1217 00:10:26.173494       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1217 00:10:26.174881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1217 00:10:32.166765       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1217 00:10:32.168534       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1217 00:10:32.464493       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1217 00:10:32.465829       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1217 00:10:54.920675       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1217 00:10:54.921677       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1217 00:11:05.906154       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1217 00:11:05.907342       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1217 00:11:08.365779       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1217 00:11:08.367145       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1217 00:11:38.002098       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1217 00:11:38.003979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [1a6f4f5a400e23f398e0aab5335420b4e49cdde8aa1f8aa33525397d22505556] <==
	I1217 00:07:17.538277       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 00:07:17.639224       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 00:07:17.639292       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.183"]
	E1217 00:07:17.639376       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 00:07:17.888345       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1217 00:07:17.888482       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1217 00:07:17.888514       1 server_linux.go:132] "Using iptables Proxier"
	I1217 00:07:17.939148       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 00:07:17.972084       1 server.go:527] "Version info" version="v1.34.2"
	I1217 00:07:17.973141       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 00:07:17.994438       1 config.go:200] "Starting service config controller"
	I1217 00:07:17.994471       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 00:07:18.009344       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 00:07:18.009379       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 00:07:18.019518       1 config.go:106] "Starting endpoint slice config controller"
	I1217 00:07:18.019544       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 00:07:18.024649       1 config.go:309] "Starting node config controller"
	I1217 00:07:18.024677       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 00:07:18.024685       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 00:07:18.094734       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 00:07:18.109493       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 00:07:18.120488       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [67450bc656f73dd9235124553c7b9a80e9f2e5403b09204044ae68765e6cdd43] <==
	I1217 00:07:06.910488       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1217 00:07:06.921661       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1217 00:07:06.922970       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1217 00:07:06.923277       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1217 00:07:06.923621       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1217 00:07:06.923758       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1217 00:07:07.761322       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1217 00:07:07.780271       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1217 00:07:07.823200       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1217 00:07:07.829033       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1217 00:07:07.844191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1217 00:07:07.844544       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1217 00:07:07.873929       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1217 00:07:07.896041       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1217 00:07:07.916557       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1217 00:07:07.947708       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1217 00:07:08.008636       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 00:07:08.059886       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1217 00:07:08.070234       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1217 00:07:08.143105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1217 00:07:08.166079       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1217 00:07:08.178854       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1217 00:07:08.346275       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1217 00:07:08.397299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	I1217 00:07:10.414867       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 17 00:10:10 addons-262069 kubelet[1517]: E1217 00:10:10.332452    1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765930210331597822 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 17 00:10:10 addons-262069 kubelet[1517]: I1217 00:10:10.890843    1517 scope.go:117] "RemoveContainer" containerID="78245a5420d8fa0b275c5f14ee3e75b7270143c9c09cd5c50c30b40c4b12186b"
	Dec 17 00:10:11 addons-262069 kubelet[1517]: I1217 00:10:11.012348    1517 scope.go:117] "RemoveContainer" containerID="83a58e432b5bf13cea6a5479cfea58185824d3be99f5929902f17f1b0998fdec"
	Dec 17 00:10:11 addons-262069 kubelet[1517]: I1217 00:10:11.136159    1517 scope.go:117] "RemoveContainer" containerID="dca63ea6f9078b26c9bf53e0b061560f42747a5396684bf67075852ac056e440"
	Dec 17 00:10:17 addons-262069 kubelet[1517]: I1217 00:10:17.004501    1517 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-h7ktx" secret="" err="secret \"gcp-auth\" not found"
	Dec 17 00:10:20 addons-262069 kubelet[1517]: E1217 00:10:20.335485    1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765930220334813484 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 17 00:10:20 addons-262069 kubelet[1517]: E1217 00:10:20.335511    1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765930220334813484 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 17 00:10:28 addons-262069 kubelet[1517]: I1217 00:10:28.004663    1517 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-66bc5c9577-225dx" secret="" err="secret \"gcp-auth\" not found"
	Dec 17 00:10:30 addons-262069 kubelet[1517]: E1217 00:10:30.338658    1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765930230338218971 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 17 00:10:30 addons-262069 kubelet[1517]: E1217 00:10:30.338701    1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765930230338218971 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 17 00:10:40 addons-262069 kubelet[1517]: E1217 00:10:40.341691    1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765930240341122443 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 17 00:10:40 addons-262069 kubelet[1517]: E1217 00:10:40.341731    1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765930240341122443 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 17 00:10:50 addons-262069 kubelet[1517]: E1217 00:10:50.344878    1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765930250344375260 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 17 00:10:50 addons-262069 kubelet[1517]: E1217 00:10:50.344908    1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765930250344375260 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 17 00:11:00 addons-262069 kubelet[1517]: E1217 00:11:00.348287    1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765930260347749150 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 17 00:11:00 addons-262069 kubelet[1517]: E1217 00:11:00.348321    1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765930260347749150 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 17 00:11:10 addons-262069 kubelet[1517]: E1217 00:11:10.352401    1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765930270351470475 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 17 00:11:10 addons-262069 kubelet[1517]: E1217 00:11:10.352449    1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765930270351470475 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 17 00:11:20 addons-262069 kubelet[1517]: E1217 00:11:20.355142    1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765930280354451370 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 17 00:11:20 addons-262069 kubelet[1517]: E1217 00:11:20.355179    1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765930280354451370 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 17 00:11:26 addons-262069 kubelet[1517]: I1217 00:11:26.005127    1517 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 17 00:11:30 addons-262069 kubelet[1517]: E1217 00:11:30.359194    1517 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765930290358712964 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 17 00:11:30 addons-262069 kubelet[1517]: E1217 00:11:30.359237    1517 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765930290358712964 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 17 00:11:35 addons-262069 kubelet[1517]: I1217 00:11:35.004805    1517 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-66bc5c9577-225dx" secret="" err="secret \"gcp-auth\" not found"
	Dec 17 00:11:37 addons-262069 kubelet[1517]: I1217 00:11:37.951334    1517 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnmq5\" (UniqueName: \"kubernetes.io/projected/5d5f0ee3-96d7-4fd9-a8f1-c32bda978dc4-kube-api-access-tnmq5\") pod \"hello-world-app-5d498dc89-98t54\" (UID: \"5d5f0ee3-96d7-4fd9-a8f1-c32bda978dc4\") " pod="default/hello-world-app-5d498dc89-98t54"
	
	
	==> storage-provisioner [dd1fdba2b689a084415531f7474442db44470fbe88cccf6cc431a5d63e3e0f4e] <==
	W1217 00:11:15.812685       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:11:17.816616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:11:17.824859       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:11:19.829438       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:11:19.835224       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:11:21.839136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:11:21.848226       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:11:23.851787       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:11:23.858319       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:11:25.862346       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:11:25.870452       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:11:27.875288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:11:27.881737       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:11:29.886662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:11:29.895153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:11:31.898973       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:11:31.905643       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:11:33.909343       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:11:33.917192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:11:35.920322       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:11:35.927383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:11:37.938397       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:11:37.956145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:11:39.960620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:11:39.970229       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-262069 -n addons-262069
helpers_test.go:270: (dbg) Run:  kubectl --context addons-262069 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: ingress-nginx-admission-create-nx5df ingress-nginx-admission-patch-d56md
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-262069 describe pod ingress-nginx-admission-create-nx5df ingress-nginx-admission-patch-d56md
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-262069 describe pod ingress-nginx-admission-create-nx5df ingress-nginx-admission-patch-d56md: exit status 1 (63.898644ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-nx5df" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-d56md" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-262069 describe pod ingress-nginx-admission-create-nx5df ingress-nginx-admission-patch-d56md: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-262069 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-262069 addons disable ingress-dns --alsologtostderr -v=1: (1.346333159s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-262069 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-262069 addons disable ingress --alsologtostderr -v=1: (7.775954082s)
--- FAIL: TestAddons/parallel/Ingress (159.80s)
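
Note: the step that failed above is the in-VM probe `curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'`; curl's exit status 28 is its operation-timeout code, so the request never received a response from the ingress controller before the SSH command gave up. Below is a minimal Go sketch (not part of the test suite) of an equivalent probe with an explicit client timeout, so a hang surfaces as an error instead of a bare exit code. The URL and Host header mirror the test command; the 10-second timeout and the assumption that it runs from inside the node (or is pointed at the cluster's ingress address) are illustrative choices, not taken from the report.

// probe.go: hedged reproduction sketch of the failing ingress check.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Explicit timeout so a stalled ingress shows up as an error,
	// analogous to curl exiting with status 28.
	client := &http.Client{Timeout: 10 * time.Second}

	req, err := http.NewRequest(http.MethodGet, "http://127.0.0.1/", nil)
	if err != nil {
		panic(err)
	}
	// The ingress routes on the Host header, not the IP, so override it.
	req.Host = "nginx.example.com"

	resp, err := client.Do(req)
	if err != nil {
		fmt.Println("request failed:", err) // a timeout here corresponds to curl exit status 28
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}

Setting req.Host (rather than a Header entry) is how Go's HTTP client overrides the Host header, which is the value the nginx ingress uses to route the request to the backing service.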

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (353.73s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-698418 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1217 00:22:53.840470   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-069802/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:23:34.801912   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-069802/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:23:44.661560   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:24:56.724734   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-069802/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:27:12.867278   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-069802/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:27:40.571910   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-069802/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-698418 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5m51.921961093s)

                                                
                                                
-- stdout --
	* [functional-698418] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22168-12839/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12839/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "functional-698418" primary control-plane node in "functional-698418" cluster
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.29.1 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: default-storageclass, storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded

                                                
                                                
** /stderr **
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-amd64 start -p functional-698418 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:776: restart took 5m51.922177706s for "functional-698418" cluster.
I1217 00:28:35.124406   17074 config.go:182] Loaded profile config "functional-698418": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-698418 -n functional-698418
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-698418 logs -n 25: (1.281072537s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-069802 image ls --format yaml --alsologtostderr                                                                                      │ functional-069802 │ jenkins │ v1.37.0 │ 17 Dec 25 00:17 UTC │ 17 Dec 25 00:17 UTC │
	│ ssh     │ functional-069802 ssh pgrep buildkitd                                                                                                           │ functional-069802 │ jenkins │ v1.37.0 │ 17 Dec 25 00:17 UTC │                     │
	│ image   │ functional-069802 image build -t localhost/my-image:functional-069802 testdata/build --alsologtostderr                                          │ functional-069802 │ jenkins │ v1.37.0 │ 17 Dec 25 00:17 UTC │ 17 Dec 25 00:17 UTC │
	│ image   │ functional-069802 image ls --format json --alsologtostderr                                                                                      │ functional-069802 │ jenkins │ v1.37.0 │ 17 Dec 25 00:17 UTC │ 17 Dec 25 00:17 UTC │
	│ image   │ functional-069802 image ls --format table --alsologtostderr                                                                                     │ functional-069802 │ jenkins │ v1.37.0 │ 17 Dec 25 00:17 UTC │ 17 Dec 25 00:17 UTC │
	│ image   │ functional-069802 image ls                                                                                                                      │ functional-069802 │ jenkins │ v1.37.0 │ 17 Dec 25 00:17 UTC │ 17 Dec 25 00:17 UTC │
	│ delete  │ -p functional-069802                                                                                                                            │ functional-069802 │ jenkins │ v1.37.0 │ 17 Dec 25 00:18 UTC │ 17 Dec 25 00:18 UTC │
	│ start   │ -p functional-698418 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:18 UTC │ 17 Dec 25 00:19 UTC │
	│ start   │ -p functional-698418 --alsologtostderr -v=8                                                                                                     │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:22 UTC │
	│ cache   │ functional-698418 cache add registry.k8s.io/pause:3.1                                                                                           │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:22 UTC │ 17 Dec 25 00:22 UTC │
	│ cache   │ functional-698418 cache add registry.k8s.io/pause:3.3                                                                                           │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:22 UTC │ 17 Dec 25 00:22 UTC │
	│ cache   │ functional-698418 cache add registry.k8s.io/pause:latest                                                                                        │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:22 UTC │ 17 Dec 25 00:22 UTC │
	│ cache   │ functional-698418 cache add minikube-local-cache-test:functional-698418                                                                         │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:22 UTC │ 17 Dec 25 00:22 UTC │
	│ cache   │ functional-698418 cache delete minikube-local-cache-test:functional-698418                                                                      │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:22 UTC │ 17 Dec 25 00:22 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 00:22 UTC │ 17 Dec 25 00:22 UTC │
	│ cache   │ list                                                                                                                                            │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 00:22 UTC │ 17 Dec 25 00:22 UTC │
	│ ssh     │ functional-698418 ssh sudo crictl images                                                                                                        │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:22 UTC │ 17 Dec 25 00:22 UTC │
	│ ssh     │ functional-698418 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                              │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:22 UTC │ 17 Dec 25 00:22 UTC │
	│ ssh     │ functional-698418 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                         │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:22 UTC │                     │
	│ cache   │ functional-698418 cache reload                                                                                                                  │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:22 UTC │ 17 Dec 25 00:22 UTC │
	│ ssh     │ functional-698418 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                         │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:22 UTC │ 17 Dec 25 00:22 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 00:22 UTC │ 17 Dec 25 00:22 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                             │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 00:22 UTC │ 17 Dec 25 00:22 UTC │
	│ kubectl │ functional-698418 kubectl -- --context functional-698418 get pods                                                                               │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:22 UTC │ 17 Dec 25 00:22 UTC │
	│ start   │ -p functional-698418 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                        │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:22 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:22:43
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:22:43.256091   25752 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:22:43.256378   25752 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:22:43.256382   25752 out.go:374] Setting ErrFile to fd 2...
	I1217 00:22:43.256386   25752 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:22:43.256567   25752 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12839/.minikube/bin
	I1217 00:22:43.257000   25752 out.go:368] Setting JSON to false
	I1217 00:22:43.257933   25752 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3909,"bootTime":1765927054,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 00:22:43.257985   25752 start.go:143] virtualization: kvm guest
	I1217 00:22:43.260384   25752 out.go:179] * [functional-698418] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 00:22:43.262283   25752 notify.go:221] Checking for updates...
	I1217 00:22:43.262326   25752 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:22:43.263890   25752 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:22:43.265399   25752 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-12839/kubeconfig
	I1217 00:22:43.267030   25752 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12839/.minikube
	I1217 00:22:43.268378   25752 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 00:22:43.269650   25752 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:22:43.271514   25752 config.go:182] Loaded profile config "functional-698418": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:22:43.271588   25752 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:22:43.303454   25752 out.go:179] * Using the kvm2 driver based on existing profile
	I1217 00:22:43.304717   25752 start.go:309] selected driver: kvm2
	I1217 00:22:43.304723   25752 start.go:927] validating driver "kvm2" against &{Name:functional-698418 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-698418 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: Mount
MSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:22:43.304808   25752 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:22:43.305698   25752 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:22:43.305713   25752 cni.go:84] Creating CNI manager for ""
	I1217 00:22:43.305759   25752 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 00:22:43.305797   25752 start.go:353] cluster config:
	{Name:functional-698418 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-698418 Namespace:default AP
IServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:22:43.305868   25752 iso.go:125] acquiring lock: {Name:mk94a221d1243bc618ab687e91468d7a3f9fe960 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:22:43.307331   25752 out.go:179] * Starting "functional-698418" primary control-plane node in "functional-698418" cluster
	I1217 00:22:43.308408   25752 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1217 00:22:43.308431   25752 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-12839/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1217 00:22:43.308436   25752 cache.go:65] Caching tarball of preloaded images
	I1217 00:22:43.308558   25752 preload.go:238] Found /home/jenkins/minikube-integration/22168-12839/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 00:22:43.308568   25752 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1217 00:22:43.308675   25752 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-698418/config.json ...
	I1217 00:22:43.308862   25752 start.go:360] acquireMachinesLock for functional-698418: {Name:mke100036b6b648b2e8844ce094d9d979b4c8eb4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1217 00:22:43.308902   25752 start.go:364] duration metric: took 27.719µs to acquireMachinesLock for "functional-698418"
	I1217 00:22:43.308915   25752 start.go:96] Skipping create...Using existing machine configuration
	I1217 00:22:43.308920   25752 fix.go:54] fixHost starting: 
	I1217 00:22:43.310601   25752 fix.go:112] recreateIfNeeded on functional-698418: state=Running err=<nil>
	W1217 00:22:43.310621   25752 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 00:22:43.312177   25752 out.go:252] * Updating the running kvm2 "functional-698418" VM ...
	I1217 00:22:43.312201   25752 machine.go:94] provisionDockerMachine start ...
	I1217 00:22:43.315074   25752 main.go:143] libmachine: domain functional-698418 has defined MAC address 52:54:00:29:90:fc in network mk-functional-698418
	I1217 00:22:43.315498   25752 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:29:90:fc", ip: ""} in network mk-functional-698418: {Iface:virbr1 ExpiryTime:2025-12-17 01:18:18 +0000 UTC Type:0 Mac:52:54:00:29:90:fc Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:functional-698418 Clientid:01:52:54:00:29:90:fc}
	I1217 00:22:43.315517   25752 main.go:143] libmachine: domain functional-698418 has defined IP address 192.168.39.109 and MAC address 52:54:00:29:90:fc in network mk-functional-698418
	I1217 00:22:43.315705   25752 main.go:143] libmachine: Using SSH client type: native
	I1217 00:22:43.316011   25752 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I1217 00:22:43.316037   25752 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:22:43.426859   25752 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-698418
	
	I1217 00:22:43.426876   25752 buildroot.go:166] provisioning hostname "functional-698418"
	I1217 00:22:43.429883   25752 main.go:143] libmachine: domain functional-698418 has defined MAC address 52:54:00:29:90:fc in network mk-functional-698418
	I1217 00:22:43.430307   25752 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:29:90:fc", ip: ""} in network mk-functional-698418: {Iface:virbr1 ExpiryTime:2025-12-17 01:18:18 +0000 UTC Type:0 Mac:52:54:00:29:90:fc Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:functional-698418 Clientid:01:52:54:00:29:90:fc}
	I1217 00:22:43.430321   25752 main.go:143] libmachine: domain functional-698418 has defined IP address 192.168.39.109 and MAC address 52:54:00:29:90:fc in network mk-functional-698418
	I1217 00:22:43.430533   25752 main.go:143] libmachine: Using SSH client type: native
	I1217 00:22:43.430721   25752 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I1217 00:22:43.430726   25752 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-698418 && echo "functional-698418" | sudo tee /etc/hostname
	I1217 00:22:43.558576   25752 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-698418
	
	I1217 00:22:43.561348   25752 main.go:143] libmachine: domain functional-698418 has defined MAC address 52:54:00:29:90:fc in network mk-functional-698418
	I1217 00:22:43.561764   25752 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:29:90:fc", ip: ""} in network mk-functional-698418: {Iface:virbr1 ExpiryTime:2025-12-17 01:18:18 +0000 UTC Type:0 Mac:52:54:00:29:90:fc Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:functional-698418 Clientid:01:52:54:00:29:90:fc}
	I1217 00:22:43.561805   25752 main.go:143] libmachine: domain functional-698418 has defined IP address 192.168.39.109 and MAC address 52:54:00:29:90:fc in network mk-functional-698418
	I1217 00:22:43.562002   25752 main.go:143] libmachine: Using SSH client type: native
	I1217 00:22:43.562287   25752 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I1217 00:22:43.562305   25752 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-698418' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-698418/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-698418' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:22:43.673350   25752 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 00:22:43.673366   25752 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22168-12839/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-12839/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-12839/.minikube}
	I1217 00:22:43.673419   25752 buildroot.go:174] setting up certificates
	I1217 00:22:43.673428   25752 provision.go:84] configureAuth start
	I1217 00:22:43.676037   25752 main.go:143] libmachine: domain functional-698418 has defined MAC address 52:54:00:29:90:fc in network mk-functional-698418
	I1217 00:22:43.676409   25752 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:29:90:fc", ip: ""} in network mk-functional-698418: {Iface:virbr1 ExpiryTime:2025-12-17 01:18:18 +0000 UTC Type:0 Mac:52:54:00:29:90:fc Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:functional-698418 Clientid:01:52:54:00:29:90:fc}
	I1217 00:22:43.676426   25752 main.go:143] libmachine: domain functional-698418 has defined IP address 192.168.39.109 and MAC address 52:54:00:29:90:fc in network mk-functional-698418
	I1217 00:22:43.678717   25752 main.go:143] libmachine: domain functional-698418 has defined MAC address 52:54:00:29:90:fc in network mk-functional-698418
	I1217 00:22:43.679151   25752 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:29:90:fc", ip: ""} in network mk-functional-698418: {Iface:virbr1 ExpiryTime:2025-12-17 01:18:18 +0000 UTC Type:0 Mac:52:54:00:29:90:fc Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:functional-698418 Clientid:01:52:54:00:29:90:fc}
	I1217 00:22:43.679162   25752 main.go:143] libmachine: domain functional-698418 has defined IP address 192.168.39.109 and MAC address 52:54:00:29:90:fc in network mk-functional-698418
	I1217 00:22:43.679273   25752 provision.go:143] copyHostCerts
	I1217 00:22:43.679318   25752 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12839/.minikube/ca.pem, removing ...
	I1217 00:22:43.679328   25752 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12839/.minikube/ca.pem
	I1217 00:22:43.679411   25752 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-12839/.minikube/ca.pem (1078 bytes)
	I1217 00:22:43.679511   25752 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12839/.minikube/cert.pem, removing ...
	I1217 00:22:43.679516   25752 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12839/.minikube/cert.pem
	I1217 00:22:43.679542   25752 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-12839/.minikube/cert.pem (1123 bytes)
	I1217 00:22:43.679593   25752 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12839/.minikube/key.pem, removing ...
	I1217 00:22:43.679596   25752 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12839/.minikube/key.pem
	I1217 00:22:43.679615   25752 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-12839/.minikube/key.pem (1679 bytes)
	I1217 00:22:43.679667   25752 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-12839/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca-key.pem org=jenkins.functional-698418 san=[127.0.0.1 192.168.39.109 functional-698418 localhost minikube]
	I1217 00:22:43.785229   25752 provision.go:177] copyRemoteCerts
	I1217 00:22:43.785280   25752 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:22:43.788142   25752 main.go:143] libmachine: domain functional-698418 has defined MAC address 52:54:00:29:90:fc in network mk-functional-698418
	I1217 00:22:43.788476   25752 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:29:90:fc", ip: ""} in network mk-functional-698418: {Iface:virbr1 ExpiryTime:2025-12-17 01:18:18 +0000 UTC Type:0 Mac:52:54:00:29:90:fc Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:functional-698418 Clientid:01:52:54:00:29:90:fc}
	I1217 00:22:43.788519   25752 main.go:143] libmachine: domain functional-698418 has defined IP address 192.168.39.109 and MAC address 52:54:00:29:90:fc in network mk-functional-698418
	I1217 00:22:43.788678   25752 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/functional-698418/id_rsa Username:docker}
	I1217 00:22:43.881039   25752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 00:22:43.917787   25752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 00:22:43.951793   25752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 00:22:43.982985   25752 provision.go:87] duration metric: took 309.545109ms to configureAuth
	I1217 00:22:43.983000   25752 buildroot.go:189] setting minikube options for container-runtime
	I1217 00:22:43.983235   25752 config.go:182] Loaded profile config "functional-698418": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:22:43.985956   25752 main.go:143] libmachine: domain functional-698418 has defined MAC address 52:54:00:29:90:fc in network mk-functional-698418
	I1217 00:22:43.986329   25752 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:29:90:fc", ip: ""} in network mk-functional-698418: {Iface:virbr1 ExpiryTime:2025-12-17 01:18:18 +0000 UTC Type:0 Mac:52:54:00:29:90:fc Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:functional-698418 Clientid:01:52:54:00:29:90:fc}
	I1217 00:22:43.986346   25752 main.go:143] libmachine: domain functional-698418 has defined IP address 192.168.39.109 and MAC address 52:54:00:29:90:fc in network mk-functional-698418
	I1217 00:22:43.986491   25752 main.go:143] libmachine: Using SSH client type: native
	I1217 00:22:43.986663   25752 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I1217 00:22:43.986671   25752 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 00:22:49.647466   25752 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 00:22:49.647480   25752 machine.go:97] duration metric: took 6.335273064s to provisionDockerMachine
	I1217 00:22:49.647491   25752 start.go:293] postStartSetup for "functional-698418" (driver="kvm2")
	I1217 00:22:49.647500   25752 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:22:49.647558   25752 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:22:49.650596   25752 main.go:143] libmachine: domain functional-698418 has defined MAC address 52:54:00:29:90:fc in network mk-functional-698418
	I1217 00:22:49.651141   25752 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:29:90:fc", ip: ""} in network mk-functional-698418: {Iface:virbr1 ExpiryTime:2025-12-17 01:18:18 +0000 UTC Type:0 Mac:52:54:00:29:90:fc Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:functional-698418 Clientid:01:52:54:00:29:90:fc}
	I1217 00:22:49.651184   25752 main.go:143] libmachine: domain functional-698418 has defined IP address 192.168.39.109 and MAC address 52:54:00:29:90:fc in network mk-functional-698418
	I1217 00:22:49.651394   25752 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/functional-698418/id_rsa Username:docker}
	I1217 00:22:49.740223   25752 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:22:49.745373   25752 info.go:137] Remote host: Buildroot 2025.02
	I1217 00:22:49.745389   25752 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12839/.minikube/addons for local assets ...
	I1217 00:22:49.745456   25752 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12839/.minikube/files for local assets ...
	I1217 00:22:49.745570   25752 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-12839/.minikube/files/etc/ssl/certs/170742.pem -> 170742.pem in /etc/ssl/certs
	I1217 00:22:49.745663   25752 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-12839/.minikube/files/etc/test/nested/copy/17074/hosts -> hosts in /etc/test/nested/copy/17074
	I1217 00:22:49.745715   25752 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/17074
	I1217 00:22:49.759499   25752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/files/etc/ssl/certs/170742.pem --> /etc/ssl/certs/170742.pem (1708 bytes)
	I1217 00:22:49.789688   25752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/files/etc/test/nested/copy/17074/hosts --> /etc/test/nested/copy/17074/hosts (40 bytes)
	I1217 00:22:49.820153   25752 start.go:296] duration metric: took 172.648935ms for postStartSetup
	I1217 00:22:49.820181   25752 fix.go:56] duration metric: took 6.511261301s for fixHost
	I1217 00:22:49.822941   25752 main.go:143] libmachine: domain functional-698418 has defined MAC address 52:54:00:29:90:fc in network mk-functional-698418
	I1217 00:22:49.823489   25752 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:29:90:fc", ip: ""} in network mk-functional-698418: {Iface:virbr1 ExpiryTime:2025-12-17 01:18:18 +0000 UTC Type:0 Mac:52:54:00:29:90:fc Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:functional-698418 Clientid:01:52:54:00:29:90:fc}
	I1217 00:22:49.823506   25752 main.go:143] libmachine: domain functional-698418 has defined IP address 192.168.39.109 and MAC address 52:54:00:29:90:fc in network mk-functional-698418
	I1217 00:22:49.823692   25752 main.go:143] libmachine: Using SSH client type: native
	I1217 00:22:49.823865   25752 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I1217 00:22:49.823869   25752 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1217 00:22:49.931410   25752 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765930969.924007537
	
	I1217 00:22:49.931424   25752 fix.go:216] guest clock: 1765930969.924007537
	I1217 00:22:49.931445   25752 fix.go:229] Guest: 2025-12-17 00:22:49.924007537 +0000 UTC Remote: 2025-12-17 00:22:49.820183058 +0000 UTC m=+6.612228707 (delta=103.824479ms)
	I1217 00:22:49.931465   25752 fix.go:200] guest clock delta is within tolerance: 103.824479ms
	I1217 00:22:49.931472   25752 start.go:83] releasing machines lock for "functional-698418", held for 6.622562915s
	I1217 00:22:49.934498   25752 main.go:143] libmachine: domain functional-698418 has defined MAC address 52:54:00:29:90:fc in network mk-functional-698418
	I1217 00:22:49.934919   25752 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:29:90:fc", ip: ""} in network mk-functional-698418: {Iface:virbr1 ExpiryTime:2025-12-17 01:18:18 +0000 UTC Type:0 Mac:52:54:00:29:90:fc Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:functional-698418 Clientid:01:52:54:00:29:90:fc}
	I1217 00:22:49.934937   25752 main.go:143] libmachine: domain functional-698418 has defined IP address 192.168.39.109 and MAC address 52:54:00:29:90:fc in network mk-functional-698418
	I1217 00:22:49.935429   25752 ssh_runner.go:195] Run: cat /version.json
	I1217 00:22:49.935493   25752 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 00:22:49.938629   25752 main.go:143] libmachine: domain functional-698418 has defined MAC address 52:54:00:29:90:fc in network mk-functional-698418
	I1217 00:22:49.938866   25752 main.go:143] libmachine: domain functional-698418 has defined MAC address 52:54:00:29:90:fc in network mk-functional-698418
	I1217 00:22:49.939128   25752 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:29:90:fc", ip: ""} in network mk-functional-698418: {Iface:virbr1 ExpiryTime:2025-12-17 01:18:18 +0000 UTC Type:0 Mac:52:54:00:29:90:fc Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:functional-698418 Clientid:01:52:54:00:29:90:fc}
	I1217 00:22:49.939169   25752 main.go:143] libmachine: domain functional-698418 has defined IP address 192.168.39.109 and MAC address 52:54:00:29:90:fc in network mk-functional-698418
	I1217 00:22:49.939380   25752 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/functional-698418/id_rsa Username:docker}
	I1217 00:22:49.939393   25752 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:29:90:fc", ip: ""} in network mk-functional-698418: {Iface:virbr1 ExpiryTime:2025-12-17 01:18:18 +0000 UTC Type:0 Mac:52:54:00:29:90:fc Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:functional-698418 Clientid:01:52:54:00:29:90:fc}
	I1217 00:22:49.939416   25752 main.go:143] libmachine: domain functional-698418 has defined IP address 192.168.39.109 and MAC address 52:54:00:29:90:fc in network mk-functional-698418
	I1217 00:22:49.939609   25752 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/functional-698418/id_rsa Username:docker}
	I1217 00:22:50.075698   25752 ssh_runner.go:195] Run: systemctl --version
	I1217 00:22:50.137545   25752 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 00:22:50.357675   25752 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 00:22:50.371801   25752 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:22:50.371852   25752 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:22:50.402988   25752 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 00:22:50.403000   25752 start.go:496] detecting cgroup driver to use...
	I1217 00:22:50.403092   25752 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 00:22:50.454308   25752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:22:50.494252   25752 docker.go:218] disabling cri-docker service (if available) ...
	I1217 00:22:50.494329   25752 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 00:22:50.534696   25752 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 00:22:50.581178   25752 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 00:22:50.924784   25752 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 00:22:51.245244   25752 docker.go:234] disabling docker service ...
	I1217 00:22:51.245309   25752 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 00:22:51.297282   25752 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 00:22:51.321070   25752 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 00:22:51.564598   25752 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 00:22:51.749522   25752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:22:51.766068   25752 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:22:51.790246   25752 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 00:22:51.790303   25752 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:22:51.803229   25752 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 00:22:51.803281   25752 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:22:51.817058   25752 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:22:51.830793   25752 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:22:51.843684   25752 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:22:51.857209   25752 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:22:51.870832   25752 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:22:51.884308   25752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:22:51.897475   25752 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:22:51.908565   25752 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:22:51.920607   25752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:22:52.094367   25752 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 00:24:22.475352   25752 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.380957716s)
	I1217 00:24:22.475375   25752 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 00:24:22.475446   25752 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 00:24:22.481753   25752 start.go:564] Will wait 60s for crictl version
	I1217 00:24:22.481801   25752 ssh_runner.go:195] Run: which crictl
	I1217 00:24:22.486190   25752 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1217 00:24:22.519965   25752 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1217 00:24:22.520049   25752 ssh_runner.go:195] Run: crio --version
	I1217 00:24:22.551131   25752 ssh_runner.go:195] Run: crio --version
	I1217 00:24:22.583013   25752 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.29.1 ...
	I1217 00:24:22.587591   25752 main.go:143] libmachine: domain functional-698418 has defined MAC address 52:54:00:29:90:fc in network mk-functional-698418
	I1217 00:24:22.587987   25752 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:29:90:fc", ip: ""} in network mk-functional-698418: {Iface:virbr1 ExpiryTime:2025-12-17 01:18:18 +0000 UTC Type:0 Mac:52:54:00:29:90:fc Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:functional-698418 Clientid:01:52:54:00:29:90:fc}
	I1217 00:24:22.588003   25752 main.go:143] libmachine: domain functional-698418 has defined IP address 192.168.39.109 and MAC address 52:54:00:29:90:fc in network mk-functional-698418
	I1217 00:24:22.588201   25752 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1217 00:24:22.594969   25752 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1217 00:24:22.596370   25752 kubeadm.go:884] updating cluster {Name:functional-698418 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.35.0-beta.0 ClusterName:functional-698418 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
ntString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:24:22.596525   25752 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1217 00:24:22.596654   25752 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:24:22.640962   25752 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:24:22.640973   25752 crio.go:433] Images already preloaded, skipping extraction
	I1217 00:24:22.641034   25752 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:24:22.675530   25752 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:24:22.675543   25752 cache_images.go:86] Images are preloaded, skipping loading
	I1217 00:24:22.675551   25752 kubeadm.go:935] updating node { 192.168.39.109 8441 v1.35.0-beta.0 crio true true} ...
	I1217 00:24:22.675661   25752 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-698418 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.109
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-698418 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 00:24:22.675737   25752 ssh_runner.go:195] Run: crio config
	I1217 00:24:22.724408   25752 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1217 00:24:22.724441   25752 cni.go:84] Creating CNI manager for ""
	I1217 00:24:22.724454   25752 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 00:24:22.724463   25752 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 00:24:22.724491   25752 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.109 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-698418 NodeName:functional-698418 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.109"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.109 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false Kubelet
ConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:24:22.724650   25752 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.109
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-698418"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.109"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.109"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 00:24:22.724738   25752 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 00:24:22.737193   25752 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:24:22.737267   25752 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:24:22.750915   25752 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I1217 00:24:22.774579   25752 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 00:24:22.795236   25752 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2077 bytes)
	I1217 00:24:22.817406   25752 ssh_runner.go:195] Run: grep 192.168.39.109	control-plane.minikube.internal$ /etc/hosts
	I1217 00:24:22.821779   25752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:24:22.996340   25752 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:24:23.016076   25752 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-698418 for IP: 192.168.39.109
	I1217 00:24:23.016088   25752 certs.go:195] generating shared ca certs ...
	I1217 00:24:23.016105   25752 certs.go:227] acquiring lock for ca certs: {Name:mk381e1d576792ac916a6048c2225a8ab856de70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:24:23.016290   25752 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-12839/.minikube/ca.key
	I1217 00:24:23.016327   25752 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-12839/.minikube/proxy-client-ca.key
	I1217 00:24:23.016333   25752 certs.go:257] generating profile certs ...
	I1217 00:24:23.016435   25752 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-698418/client.key
	I1217 00:24:23.016506   25752 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-698418/apiserver.key.513eab2d
	I1217 00:24:23.016559   25752 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-698418/proxy-client.key
	I1217 00:24:23.016677   25752 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/17074.pem (1338 bytes)
	W1217 00:24:23.016701   25752 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-12839/.minikube/certs/17074_empty.pem, impossibly tiny 0 bytes
	I1217 00:24:23.016706   25752 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca-key.pem (1679 bytes)
	I1217 00:24:23.016729   25752 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca.pem (1078 bytes)
	I1217 00:24:23.016747   25752 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/cert.pem (1123 bytes)
	I1217 00:24:23.016775   25752 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/key.pem (1679 bytes)
	I1217 00:24:23.016817   25752 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/files/etc/ssl/certs/170742.pem (1708 bytes)
	I1217 00:24:23.017549   25752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:24:23.048306   25752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 00:24:23.078313   25752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:24:23.108885   25752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 00:24:23.139263   25752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-698418/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 00:24:23.168354   25752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-698418/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 00:24:23.197399   25752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-698418/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:24:23.228262   25752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-698418/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 00:24:23.259506   25752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:24:23.291110   25752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/certs/17074.pem --> /usr/share/ca-certificates/17074.pem (1338 bytes)
	I1217 00:24:23.322841   25752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/files/etc/ssl/certs/170742.pem --> /usr/share/ca-certificates/170742.pem (1708 bytes)
	I1217 00:24:23.354523   25752 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:24:23.376349   25752 ssh_runner.go:195] Run: openssl version
	I1217 00:24:23.383816   25752 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:24:23.396275   25752 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:24:23.408513   25752 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:24:23.414245   25752 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:24:23.414293   25752 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:24:23.422417   25752 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 00:24:23.435703   25752 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/17074.pem
	I1217 00:24:23.448163   25752 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/17074.pem /etc/ssl/certs/17074.pem
	I1217 00:24:23.460668   25752 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17074.pem
	I1217 00:24:23.466145   25752 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:18 /usr/share/ca-certificates/17074.pem
	I1217 00:24:23.466201   25752 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17074.pem
	I1217 00:24:23.473914   25752 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 00:24:23.486265   25752 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/170742.pem
	I1217 00:24:23.498499   25752 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/170742.pem /etc/ssl/certs/170742.pem
	I1217 00:24:23.510590   25752 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/170742.pem
	I1217 00:24:23.516478   25752 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:18 /usr/share/ca-certificates/170742.pem
	I1217 00:24:23.516527   25752 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/170742.pem
	I1217 00:24:23.524260   25752 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 00:24:23.536459   25752 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:24:23.542296   25752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 00:24:23.549635   25752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 00:24:23.556801   25752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 00:24:23.564140   25752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 00:24:23.571538   25752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 00:24:23.578672   25752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1217 00:24:23.585769   25752 kubeadm.go:401] StartCluster: {Name:functional-698418 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35
.0-beta.0 ClusterName:functional-698418 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountS
tring: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:24:23.585858   25752 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:24:23.585912   25752 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:24:23.620895   25752 cri.go:89] found id: "fe278b7670e033003fe8649b2824cf6b6516ce6b4b51066c8b25075f6c74ecd1"
	I1217 00:24:23.620906   25752 cri.go:89] found id: "25dad2630a2cc7b34ac11027a3984e04a6f04faeb4aea873980cbc58aa6ad446"
	I1217 00:24:23.620910   25752 cri.go:89] found id: "6da27c7e1968f00886d58afd572464f1215b610d6f810eae1693c39394b159ab"
	I1217 00:24:23.620913   25752 cri.go:89] found id: "1a94b2a880eb458a6a0cc8ace2efd1df4bc6d4ddbcf37229a45a6992bc612bc3"
	I1217 00:24:23.620917   25752 cri.go:89] found id: "4ccf8afdca857a06f5fafcae2d2203299adc9758a4deda7044a39b8fa9c78cf0"
	I1217 00:24:23.620921   25752 cri.go:89] found id: "95a7023d7b96443435118fb950ed0496e6a19ddc94f55a8e252d4ea702aa939b"
	I1217 00:24:23.620924   25752 cri.go:89] found id: "089ad298c6676a699fe577e2d5d71b0c488b5a9d3e33173bdc20901711843fd1"
	I1217 00:24:23.620927   25752 cri.go:89] found id: "7ce2d94b0a8558c089c3ffe1724c562dd1ef86bafb6973b68f04b587a0531bb2"
	I1217 00:24:23.620931   25752 cri.go:89] found id: "f6f055f0d6667bff85c0820d211944d7a5377cfa4e4de452b3ab190662b16761"
	I1217 00:24:23.620940   25752 cri.go:89] found id: ""
	I1217 00:24:23.620994   25752 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
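Note: the log above shows minikube rewriting /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup_manager, conmon_cgroup, default_sysctls) and then restarting cri-o, a restart that took roughly 1m30s in this run. A minimal sketch for inspecting the resulting runtime configuration by hand, assuming shell access to the VM (e.g. via "out/minikube-linux-amd64 -p functional-698418 ssh"); the grep patterns are illustrative:

	# drop-in file edited by the sed commands in the log above
	sudo cat /etc/crio/crio.conf.d/02-crio.conf
	# effective values after the restart (crio config is the same command the log runs)
	sudo crio config | grep -E 'pause_image|cgroup_manager'
	# unit logs, e.g. to see why the restart was slow
	sudo journalctl -u crio --no-pager | tail -n 50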
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-698418 -n functional-698418
helpers_test.go:270: (dbg) Run:  kubectl --context functional-698418 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (353.73s)
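Note: this failing run restarts the existing profile with --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision (see the start command in the Audit table below). A hedged sketch of reproducing and checking that override by hand, assuming the same profile name and binary path; kubeadm writes the static-pod manifests to /etc/kubernetes/manifests, so the flag should be visible there:

	out/minikube-linux-amd64 start -p functional-698418 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
	# the override should land in the apiserver static-pod manifest
	out/minikube-linux-amd64 -p functional-698418 ssh -- \
	  sudo grep enable-admission-plugins /etc/kubernetes/manifests/kube-apiserver.yaml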

                                                
                                    

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (1.85s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-698418 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:848: kube-scheduler is not Ready: {Phase:Running Conditions:[{Type:PodReadyToStartContainers Status:False} {Type:Initialized Status:True} {Type:Ready Status:False} {Type:ContainersReady Status:False} {Type:PodScheduled Status:True}] Message: Reason: HostIP:192.168.39.109 PodIP:192.168.39.109 StartTime:2025-12-17 00:24:25 +0000 UTC ContainerStatuses:[{Name:kube-scheduler State:{Waiting:<nil> Running:<nil> Terminated:0xc00047a0e0} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:2 Image:registry.k8s.io/kube-scheduler:v1.35.0-beta.0 ImageID:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46 ContainerID:cri-o://95a7023d7b96443435118fb950ed0496e6a19ddc94f55a8e252d4ea702aa939b}]}
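Note: functional_test.go:825 drives this check with "kubectl get po -l tier=control-plane -n kube-system -o=json" and expects every control-plane pod to report Ready; here kube-scheduler was Running but its Ready condition was False after two restarts. A minimal sketch of the equivalent manual check (the jsonpath expression and the component label are the standard kubeadm ones, used here for illustration):

	kubectl --context functional-698418 -n kube-system get pods -l tier=control-plane \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'
	# dig into why the scheduler container keeps restarting
	kubectl --context functional-698418 -n kube-system describe pod -l component=kube-scheduler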
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-698418 -n functional-698418
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-698418 logs -n 25: (1.268092957s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-069802 image ls --format yaml --alsologtostderr                                                                                      │ functional-069802 │ jenkins │ v1.37.0 │ 17 Dec 25 00:17 UTC │ 17 Dec 25 00:17 UTC │
	│ ssh     │ functional-069802 ssh pgrep buildkitd                                                                                                           │ functional-069802 │ jenkins │ v1.37.0 │ 17 Dec 25 00:17 UTC │                     │
	│ image   │ functional-069802 image build -t localhost/my-image:functional-069802 testdata/build --alsologtostderr                                          │ functional-069802 │ jenkins │ v1.37.0 │ 17 Dec 25 00:17 UTC │ 17 Dec 25 00:17 UTC │
	│ image   │ functional-069802 image ls --format json --alsologtostderr                                                                                      │ functional-069802 │ jenkins │ v1.37.0 │ 17 Dec 25 00:17 UTC │ 17 Dec 25 00:17 UTC │
	│ image   │ functional-069802 image ls --format table --alsologtostderr                                                                                     │ functional-069802 │ jenkins │ v1.37.0 │ 17 Dec 25 00:17 UTC │ 17 Dec 25 00:17 UTC │
	│ image   │ functional-069802 image ls                                                                                                                      │ functional-069802 │ jenkins │ v1.37.0 │ 17 Dec 25 00:17 UTC │ 17 Dec 25 00:17 UTC │
	│ delete  │ -p functional-069802                                                                                                                            │ functional-069802 │ jenkins │ v1.37.0 │ 17 Dec 25 00:18 UTC │ 17 Dec 25 00:18 UTC │
	│ start   │ -p functional-698418 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:18 UTC │ 17 Dec 25 00:19 UTC │
	│ start   │ -p functional-698418 --alsologtostderr -v=8                                                                                                     │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:22 UTC │
	│ cache   │ functional-698418 cache add registry.k8s.io/pause:3.1                                                                                           │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:22 UTC │ 17 Dec 25 00:22 UTC │
	│ cache   │ functional-698418 cache add registry.k8s.io/pause:3.3                                                                                           │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:22 UTC │ 17 Dec 25 00:22 UTC │
	│ cache   │ functional-698418 cache add registry.k8s.io/pause:latest                                                                                        │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:22 UTC │ 17 Dec 25 00:22 UTC │
	│ cache   │ functional-698418 cache add minikube-local-cache-test:functional-698418                                                                         │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:22 UTC │ 17 Dec 25 00:22 UTC │
	│ cache   │ functional-698418 cache delete minikube-local-cache-test:functional-698418                                                                      │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:22 UTC │ 17 Dec 25 00:22 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 00:22 UTC │ 17 Dec 25 00:22 UTC │
	│ cache   │ list                                                                                                                                            │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 00:22 UTC │ 17 Dec 25 00:22 UTC │
	│ ssh     │ functional-698418 ssh sudo crictl images                                                                                                        │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:22 UTC │ 17 Dec 25 00:22 UTC │
	│ ssh     │ functional-698418 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                              │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:22 UTC │ 17 Dec 25 00:22 UTC │
	│ ssh     │ functional-698418 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                         │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:22 UTC │                     │
	│ cache   │ functional-698418 cache reload                                                                                                                  │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:22 UTC │ 17 Dec 25 00:22 UTC │
	│ ssh     │ functional-698418 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                         │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:22 UTC │ 17 Dec 25 00:22 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 00:22 UTC │ 17 Dec 25 00:22 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                             │ minikube          │ jenkins │ v1.37.0 │ 17 Dec 25 00:22 UTC │ 17 Dec 25 00:22 UTC │
	│ kubectl │ functional-698418 kubectl -- --context functional-698418 get pods                                                                               │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:22 UTC │ 17 Dec 25 00:22 UTC │
	│ start   │ -p functional-698418 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                        │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:22 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:22:43
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:22:43.256091   25752 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:22:43.256378   25752 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:22:43.256382   25752 out.go:374] Setting ErrFile to fd 2...
	I1217 00:22:43.256386   25752 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:22:43.256567   25752 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12839/.minikube/bin
	I1217 00:22:43.257000   25752 out.go:368] Setting JSON to false
	I1217 00:22:43.257933   25752 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3909,"bootTime":1765927054,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 00:22:43.257985   25752 start.go:143] virtualization: kvm guest
	I1217 00:22:43.260384   25752 out.go:179] * [functional-698418] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 00:22:43.262283   25752 notify.go:221] Checking for updates...
	I1217 00:22:43.262326   25752 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:22:43.263890   25752 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:22:43.265399   25752 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-12839/kubeconfig
	I1217 00:22:43.267030   25752 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12839/.minikube
	I1217 00:22:43.268378   25752 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 00:22:43.269650   25752 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:22:43.271514   25752 config.go:182] Loaded profile config "functional-698418": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:22:43.271588   25752 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:22:43.303454   25752 out.go:179] * Using the kvm2 driver based on existing profile
	I1217 00:22:43.304717   25752 start.go:309] selected driver: kvm2
	I1217 00:22:43.304723   25752 start.go:927] validating driver "kvm2" against &{Name:functional-698418 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-698418 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: Mount
MSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:22:43.304808   25752 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:22:43.305698   25752 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:22:43.305713   25752 cni.go:84] Creating CNI manager for ""
	I1217 00:22:43.305759   25752 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 00:22:43.305797   25752 start.go:353] cluster config:
	{Name:functional-698418 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-698418 Namespace:default AP
IServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:22:43.305868   25752 iso.go:125] acquiring lock: {Name:mk94a221d1243bc618ab687e91468d7a3f9fe960 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:22:43.307331   25752 out.go:179] * Starting "functional-698418" primary control-plane node in "functional-698418" cluster
	I1217 00:22:43.308408   25752 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1217 00:22:43.308431   25752 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-12839/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1217 00:22:43.308436   25752 cache.go:65] Caching tarball of preloaded images
	I1217 00:22:43.308558   25752 preload.go:238] Found /home/jenkins/minikube-integration/22168-12839/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 00:22:43.308568   25752 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1217 00:22:43.308675   25752 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-698418/config.json ...
	I1217 00:22:43.308862   25752 start.go:360] acquireMachinesLock for functional-698418: {Name:mke100036b6b648b2e8844ce094d9d979b4c8eb4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1217 00:22:43.308902   25752 start.go:364] duration metric: took 27.719µs to acquireMachinesLock for "functional-698418"
	I1217 00:22:43.308915   25752 start.go:96] Skipping create...Using existing machine configuration
	I1217 00:22:43.308920   25752 fix.go:54] fixHost starting: 
	I1217 00:22:43.310601   25752 fix.go:112] recreateIfNeeded on functional-698418: state=Running err=<nil>
	W1217 00:22:43.310621   25752 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 00:22:43.312177   25752 out.go:252] * Updating the running kvm2 "functional-698418" VM ...
	I1217 00:22:43.312201   25752 machine.go:94] provisionDockerMachine start ...
	I1217 00:22:43.315074   25752 main.go:143] libmachine: domain functional-698418 has defined MAC address 52:54:00:29:90:fc in network mk-functional-698418
	I1217 00:22:43.315498   25752 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:29:90:fc", ip: ""} in network mk-functional-698418: {Iface:virbr1 ExpiryTime:2025-12-17 01:18:18 +0000 UTC Type:0 Mac:52:54:00:29:90:fc Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:functional-698418 Clientid:01:52:54:00:29:90:fc}
	I1217 00:22:43.315517   25752 main.go:143] libmachine: domain functional-698418 has defined IP address 192.168.39.109 and MAC address 52:54:00:29:90:fc in network mk-functional-698418
	I1217 00:22:43.315705   25752 main.go:143] libmachine: Using SSH client type: native
	I1217 00:22:43.316011   25752 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I1217 00:22:43.316037   25752 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:22:43.426859   25752 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-698418
	
	I1217 00:22:43.426876   25752 buildroot.go:166] provisioning hostname "functional-698418"
	I1217 00:22:43.429883   25752 main.go:143] libmachine: domain functional-698418 has defined MAC address 52:54:00:29:90:fc in network mk-functional-698418
	I1217 00:22:43.430307   25752 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:29:90:fc", ip: ""} in network mk-functional-698418: {Iface:virbr1 ExpiryTime:2025-12-17 01:18:18 +0000 UTC Type:0 Mac:52:54:00:29:90:fc Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:functional-698418 Clientid:01:52:54:00:29:90:fc}
	I1217 00:22:43.430321   25752 main.go:143] libmachine: domain functional-698418 has defined IP address 192.168.39.109 and MAC address 52:54:00:29:90:fc in network mk-functional-698418
	I1217 00:22:43.430533   25752 main.go:143] libmachine: Using SSH client type: native
	I1217 00:22:43.430721   25752 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I1217 00:22:43.430726   25752 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-698418 && echo "functional-698418" | sudo tee /etc/hostname
	I1217 00:22:43.558576   25752 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-698418
	
	I1217 00:22:43.561348   25752 main.go:143] libmachine: domain functional-698418 has defined MAC address 52:54:00:29:90:fc in network mk-functional-698418
	I1217 00:22:43.561764   25752 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:29:90:fc", ip: ""} in network mk-functional-698418: {Iface:virbr1 ExpiryTime:2025-12-17 01:18:18 +0000 UTC Type:0 Mac:52:54:00:29:90:fc Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:functional-698418 Clientid:01:52:54:00:29:90:fc}
	I1217 00:22:43.561805   25752 main.go:143] libmachine: domain functional-698418 has defined IP address 192.168.39.109 and MAC address 52:54:00:29:90:fc in network mk-functional-698418
	I1217 00:22:43.562002   25752 main.go:143] libmachine: Using SSH client type: native
	I1217 00:22:43.562287   25752 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I1217 00:22:43.562305   25752 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-698418' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-698418/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-698418' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:22:43.673350   25752 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 00:22:43.673366   25752 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22168-12839/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-12839/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-12839/.minikube}
	I1217 00:22:43.673419   25752 buildroot.go:174] setting up certificates
	I1217 00:22:43.673428   25752 provision.go:84] configureAuth start
	I1217 00:22:43.676037   25752 main.go:143] libmachine: domain functional-698418 has defined MAC address 52:54:00:29:90:fc in network mk-functional-698418
	I1217 00:22:43.676409   25752 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:29:90:fc", ip: ""} in network mk-functional-698418: {Iface:virbr1 ExpiryTime:2025-12-17 01:18:18 +0000 UTC Type:0 Mac:52:54:00:29:90:fc Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:functional-698418 Clientid:01:52:54:00:29:90:fc}
	I1217 00:22:43.676426   25752 main.go:143] libmachine: domain functional-698418 has defined IP address 192.168.39.109 and MAC address 52:54:00:29:90:fc in network mk-functional-698418
	I1217 00:22:43.678717   25752 main.go:143] libmachine: domain functional-698418 has defined MAC address 52:54:00:29:90:fc in network mk-functional-698418
	I1217 00:22:43.679151   25752 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:29:90:fc", ip: ""} in network mk-functional-698418: {Iface:virbr1 ExpiryTime:2025-12-17 01:18:18 +0000 UTC Type:0 Mac:52:54:00:29:90:fc Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:functional-698418 Clientid:01:52:54:00:29:90:fc}
	I1217 00:22:43.679162   25752 main.go:143] libmachine: domain functional-698418 has defined IP address 192.168.39.109 and MAC address 52:54:00:29:90:fc in network mk-functional-698418
	I1217 00:22:43.679273   25752 provision.go:143] copyHostCerts
	I1217 00:22:43.679318   25752 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12839/.minikube/ca.pem, removing ...
	I1217 00:22:43.679328   25752 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12839/.minikube/ca.pem
	I1217 00:22:43.679411   25752 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-12839/.minikube/ca.pem (1078 bytes)
	I1217 00:22:43.679511   25752 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12839/.minikube/cert.pem, removing ...
	I1217 00:22:43.679516   25752 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12839/.minikube/cert.pem
	I1217 00:22:43.679542   25752 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-12839/.minikube/cert.pem (1123 bytes)
	I1217 00:22:43.679593   25752 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12839/.minikube/key.pem, removing ...
	I1217 00:22:43.679596   25752 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12839/.minikube/key.pem
	I1217 00:22:43.679615   25752 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-12839/.minikube/key.pem (1679 bytes)
	I1217 00:22:43.679667   25752 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-12839/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca-key.pem org=jenkins.functional-698418 san=[127.0.0.1 192.168.39.109 functional-698418 localhost minikube]
	I1217 00:22:43.785229   25752 provision.go:177] copyRemoteCerts
	I1217 00:22:43.785280   25752 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:22:43.788142   25752 main.go:143] libmachine: domain functional-698418 has defined MAC address 52:54:00:29:90:fc in network mk-functional-698418
	I1217 00:22:43.788476   25752 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:29:90:fc", ip: ""} in network mk-functional-698418: {Iface:virbr1 ExpiryTime:2025-12-17 01:18:18 +0000 UTC Type:0 Mac:52:54:00:29:90:fc Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:functional-698418 Clientid:01:52:54:00:29:90:fc}
	I1217 00:22:43.788519   25752 main.go:143] libmachine: domain functional-698418 has defined IP address 192.168.39.109 and MAC address 52:54:00:29:90:fc in network mk-functional-698418
	I1217 00:22:43.788678   25752 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/functional-698418/id_rsa Username:docker}
	I1217 00:22:43.881039   25752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 00:22:43.917787   25752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 00:22:43.951793   25752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 00:22:43.982985   25752 provision.go:87] duration metric: took 309.545109ms to configureAuth
	I1217 00:22:43.983000   25752 buildroot.go:189] setting minikube options for container-runtime
	I1217 00:22:43.983235   25752 config.go:182] Loaded profile config "functional-698418": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:22:43.985956   25752 main.go:143] libmachine: domain functional-698418 has defined MAC address 52:54:00:29:90:fc in network mk-functional-698418
	I1217 00:22:43.986329   25752 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:29:90:fc", ip: ""} in network mk-functional-698418: {Iface:virbr1 ExpiryTime:2025-12-17 01:18:18 +0000 UTC Type:0 Mac:52:54:00:29:90:fc Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:functional-698418 Clientid:01:52:54:00:29:90:fc}
	I1217 00:22:43.986346   25752 main.go:143] libmachine: domain functional-698418 has defined IP address 192.168.39.109 and MAC address 52:54:00:29:90:fc in network mk-functional-698418
	I1217 00:22:43.986491   25752 main.go:143] libmachine: Using SSH client type: native
	I1217 00:22:43.986663   25752 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I1217 00:22:43.986671   25752 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 00:22:49.647466   25752 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 00:22:49.647480   25752 machine.go:97] duration metric: took 6.335273064s to provisionDockerMachine
	I1217 00:22:49.647491   25752 start.go:293] postStartSetup for "functional-698418" (driver="kvm2")
	I1217 00:22:49.647500   25752 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:22:49.647558   25752 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:22:49.650596   25752 main.go:143] libmachine: domain functional-698418 has defined MAC address 52:54:00:29:90:fc in network mk-functional-698418
	I1217 00:22:49.651141   25752 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:29:90:fc", ip: ""} in network mk-functional-698418: {Iface:virbr1 ExpiryTime:2025-12-17 01:18:18 +0000 UTC Type:0 Mac:52:54:00:29:90:fc Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:functional-698418 Clientid:01:52:54:00:29:90:fc}
	I1217 00:22:49.651184   25752 main.go:143] libmachine: domain functional-698418 has defined IP address 192.168.39.109 and MAC address 52:54:00:29:90:fc in network mk-functional-698418
	I1217 00:22:49.651394   25752 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/functional-698418/id_rsa Username:docker}
	I1217 00:22:49.740223   25752 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:22:49.745373   25752 info.go:137] Remote host: Buildroot 2025.02
	I1217 00:22:49.745389   25752 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12839/.minikube/addons for local assets ...
	I1217 00:22:49.745456   25752 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12839/.minikube/files for local assets ...
	I1217 00:22:49.745570   25752 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-12839/.minikube/files/etc/ssl/certs/170742.pem -> 170742.pem in /etc/ssl/certs
	I1217 00:22:49.745663   25752 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-12839/.minikube/files/etc/test/nested/copy/17074/hosts -> hosts in /etc/test/nested/copy/17074
	I1217 00:22:49.745715   25752 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/17074
	I1217 00:22:49.759499   25752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/files/etc/ssl/certs/170742.pem --> /etc/ssl/certs/170742.pem (1708 bytes)
	I1217 00:22:49.789688   25752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/files/etc/test/nested/copy/17074/hosts --> /etc/test/nested/copy/17074/hosts (40 bytes)
	I1217 00:22:49.820153   25752 start.go:296] duration metric: took 172.648935ms for postStartSetup
	I1217 00:22:49.820181   25752 fix.go:56] duration metric: took 6.511261301s for fixHost
	I1217 00:22:49.822941   25752 main.go:143] libmachine: domain functional-698418 has defined MAC address 52:54:00:29:90:fc in network mk-functional-698418
	I1217 00:22:49.823489   25752 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:29:90:fc", ip: ""} in network mk-functional-698418: {Iface:virbr1 ExpiryTime:2025-12-17 01:18:18 +0000 UTC Type:0 Mac:52:54:00:29:90:fc Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:functional-698418 Clientid:01:52:54:00:29:90:fc}
	I1217 00:22:49.823506   25752 main.go:143] libmachine: domain functional-698418 has defined IP address 192.168.39.109 and MAC address 52:54:00:29:90:fc in network mk-functional-698418
	I1217 00:22:49.823692   25752 main.go:143] libmachine: Using SSH client type: native
	I1217 00:22:49.823865   25752 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.109 22 <nil> <nil>}
	I1217 00:22:49.823869   25752 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1217 00:22:49.931410   25752 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765930969.924007537
	
	I1217 00:22:49.931424   25752 fix.go:216] guest clock: 1765930969.924007537
	I1217 00:22:49.931445   25752 fix.go:229] Guest: 2025-12-17 00:22:49.924007537 +0000 UTC Remote: 2025-12-17 00:22:49.820183058 +0000 UTC m=+6.612228707 (delta=103.824479ms)
	I1217 00:22:49.931465   25752 fix.go:200] guest clock delta is within tolerance: 103.824479ms
	I1217 00:22:49.931472   25752 start.go:83] releasing machines lock for "functional-698418", held for 6.622562915s
	I1217 00:22:49.934498   25752 main.go:143] libmachine: domain functional-698418 has defined MAC address 52:54:00:29:90:fc in network mk-functional-698418
	I1217 00:22:49.934919   25752 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:29:90:fc", ip: ""} in network mk-functional-698418: {Iface:virbr1 ExpiryTime:2025-12-17 01:18:18 +0000 UTC Type:0 Mac:52:54:00:29:90:fc Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:functional-698418 Clientid:01:52:54:00:29:90:fc}
	I1217 00:22:49.934937   25752 main.go:143] libmachine: domain functional-698418 has defined IP address 192.168.39.109 and MAC address 52:54:00:29:90:fc in network mk-functional-698418
	I1217 00:22:49.935429   25752 ssh_runner.go:195] Run: cat /version.json
	I1217 00:22:49.935493   25752 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 00:22:49.938629   25752 main.go:143] libmachine: domain functional-698418 has defined MAC address 52:54:00:29:90:fc in network mk-functional-698418
	I1217 00:22:49.938866   25752 main.go:143] libmachine: domain functional-698418 has defined MAC address 52:54:00:29:90:fc in network mk-functional-698418
	I1217 00:22:49.939128   25752 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:29:90:fc", ip: ""} in network mk-functional-698418: {Iface:virbr1 ExpiryTime:2025-12-17 01:18:18 +0000 UTC Type:0 Mac:52:54:00:29:90:fc Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:functional-698418 Clientid:01:52:54:00:29:90:fc}
	I1217 00:22:49.939169   25752 main.go:143] libmachine: domain functional-698418 has defined IP address 192.168.39.109 and MAC address 52:54:00:29:90:fc in network mk-functional-698418
	I1217 00:22:49.939380   25752 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/functional-698418/id_rsa Username:docker}
	I1217 00:22:49.939393   25752 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:29:90:fc", ip: ""} in network mk-functional-698418: {Iface:virbr1 ExpiryTime:2025-12-17 01:18:18 +0000 UTC Type:0 Mac:52:54:00:29:90:fc Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:functional-698418 Clientid:01:52:54:00:29:90:fc}
	I1217 00:22:49.939416   25752 main.go:143] libmachine: domain functional-698418 has defined IP address 192.168.39.109 and MAC address 52:54:00:29:90:fc in network mk-functional-698418
	I1217 00:22:49.939609   25752 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/functional-698418/id_rsa Username:docker}
	I1217 00:22:50.075698   25752 ssh_runner.go:195] Run: systemctl --version
	I1217 00:22:50.137545   25752 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 00:22:50.357675   25752 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 00:22:50.371801   25752 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:22:50.371852   25752 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:22:50.402988   25752 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 00:22:50.403000   25752 start.go:496] detecting cgroup driver to use...
	I1217 00:22:50.403092   25752 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 00:22:50.454308   25752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:22:50.494252   25752 docker.go:218] disabling cri-docker service (if available) ...
	I1217 00:22:50.494329   25752 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 00:22:50.534696   25752 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 00:22:50.581178   25752 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 00:22:50.924784   25752 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 00:22:51.245244   25752 docker.go:234] disabling docker service ...
	I1217 00:22:51.245309   25752 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 00:22:51.297282   25752 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 00:22:51.321070   25752 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 00:22:51.564598   25752 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 00:22:51.749522   25752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:22:51.766068   25752 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:22:51.790246   25752 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 00:22:51.790303   25752 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:22:51.803229   25752 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 00:22:51.803281   25752 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:22:51.817058   25752 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:22:51.830793   25752 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:22:51.843684   25752 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:22:51.857209   25752 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:22:51.870832   25752 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:22:51.884308   25752 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 00:22:51.897475   25752 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:22:51.908565   25752 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:22:51.920607   25752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:22:52.094367   25752 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 00:24:22.475352   25752 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.380957716s)
	I1217 00:24:22.475375   25752 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 00:24:22.475446   25752 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 00:24:22.481753   25752 start.go:564] Will wait 60s for crictl version
	I1217 00:24:22.481801   25752 ssh_runner.go:195] Run: which crictl
	I1217 00:24:22.486190   25752 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1217 00:24:22.519965   25752 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1217 00:24:22.520049   25752 ssh_runner.go:195] Run: crio --version
	I1217 00:24:22.551131   25752 ssh_runner.go:195] Run: crio --version
	I1217 00:24:22.583013   25752 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.29.1 ...
	I1217 00:24:22.587591   25752 main.go:143] libmachine: domain functional-698418 has defined MAC address 52:54:00:29:90:fc in network mk-functional-698418
	I1217 00:24:22.587987   25752 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:29:90:fc", ip: ""} in network mk-functional-698418: {Iface:virbr1 ExpiryTime:2025-12-17 01:18:18 +0000 UTC Type:0 Mac:52:54:00:29:90:fc Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:functional-698418 Clientid:01:52:54:00:29:90:fc}
	I1217 00:24:22.588003   25752 main.go:143] libmachine: domain functional-698418 has defined IP address 192.168.39.109 and MAC address 52:54:00:29:90:fc in network mk-functional-698418
	I1217 00:24:22.588201   25752 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1217 00:24:22.594969   25752 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1217 00:24:22.596370   25752 kubeadm.go:884] updating cluster {Name:functional-698418 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-698418 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:24:22.596525   25752 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1217 00:24:22.596654   25752 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:24:22.640962   25752 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:24:22.640973   25752 crio.go:433] Images already preloaded, skipping extraction
	I1217 00:24:22.641034   25752 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 00:24:22.675530   25752 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 00:24:22.675543   25752 cache_images.go:86] Images are preloaded, skipping loading
	I1217 00:24:22.675551   25752 kubeadm.go:935] updating node { 192.168.39.109 8441 v1.35.0-beta.0 crio true true} ...
	I1217 00:24:22.675661   25752 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-698418 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.109
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-698418 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 00:24:22.675737   25752 ssh_runner.go:195] Run: crio config
	I1217 00:24:22.724408   25752 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1217 00:24:22.724441   25752 cni.go:84] Creating CNI manager for ""
	I1217 00:24:22.724454   25752 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 00:24:22.724463   25752 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 00:24:22.724491   25752 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.109 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-698418 NodeName:functional-698418 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.109"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.109 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:24:22.724650   25752 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.109
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-698418"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.109"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.109"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 00:24:22.724738   25752 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 00:24:22.737193   25752 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:24:22.737267   25752 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:24:22.750915   25752 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I1217 00:24:22.774579   25752 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 00:24:22.795236   25752 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2077 bytes)
	I1217 00:24:22.817406   25752 ssh_runner.go:195] Run: grep 192.168.39.109	control-plane.minikube.internal$ /etc/hosts
	I1217 00:24:22.821779   25752 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:24:22.996340   25752 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:24:23.016076   25752 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-698418 for IP: 192.168.39.109
	I1217 00:24:23.016088   25752 certs.go:195] generating shared ca certs ...
	I1217 00:24:23.016105   25752 certs.go:227] acquiring lock for ca certs: {Name:mk381e1d576792ac916a6048c2225a8ab856de70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:24:23.016290   25752 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-12839/.minikube/ca.key
	I1217 00:24:23.016327   25752 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-12839/.minikube/proxy-client-ca.key
	I1217 00:24:23.016333   25752 certs.go:257] generating profile certs ...
	I1217 00:24:23.016435   25752 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-698418/client.key
	I1217 00:24:23.016506   25752 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-698418/apiserver.key.513eab2d
	I1217 00:24:23.016559   25752 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-698418/proxy-client.key
	I1217 00:24:23.016677   25752 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/17074.pem (1338 bytes)
	W1217 00:24:23.016701   25752 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-12839/.minikube/certs/17074_empty.pem, impossibly tiny 0 bytes
	I1217 00:24:23.016706   25752 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca-key.pem (1679 bytes)
	I1217 00:24:23.016729   25752 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca.pem (1078 bytes)
	I1217 00:24:23.016747   25752 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/cert.pem (1123 bytes)
	I1217 00:24:23.016775   25752 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/key.pem (1679 bytes)
	I1217 00:24:23.016817   25752 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/files/etc/ssl/certs/170742.pem (1708 bytes)
	I1217 00:24:23.017549   25752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:24:23.048306   25752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 00:24:23.078313   25752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:24:23.108885   25752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 00:24:23.139263   25752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-698418/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 00:24:23.168354   25752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-698418/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 00:24:23.197399   25752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-698418/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:24:23.228262   25752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-698418/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 00:24:23.259506   25752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:24:23.291110   25752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/certs/17074.pem --> /usr/share/ca-certificates/17074.pem (1338 bytes)
	I1217 00:24:23.322841   25752 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/files/etc/ssl/certs/170742.pem --> /usr/share/ca-certificates/170742.pem (1708 bytes)
	I1217 00:24:23.354523   25752 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:24:23.376349   25752 ssh_runner.go:195] Run: openssl version
	I1217 00:24:23.383816   25752 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:24:23.396275   25752 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:24:23.408513   25752 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:24:23.414245   25752 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:24:23.414293   25752 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:24:23.422417   25752 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 00:24:23.435703   25752 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/17074.pem
	I1217 00:24:23.448163   25752 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/17074.pem /etc/ssl/certs/17074.pem
	I1217 00:24:23.460668   25752 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17074.pem
	I1217 00:24:23.466145   25752 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:18 /usr/share/ca-certificates/17074.pem
	I1217 00:24:23.466201   25752 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17074.pem
	I1217 00:24:23.473914   25752 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 00:24:23.486265   25752 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/170742.pem
	I1217 00:24:23.498499   25752 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/170742.pem /etc/ssl/certs/170742.pem
	I1217 00:24:23.510590   25752 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/170742.pem
	I1217 00:24:23.516478   25752 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:18 /usr/share/ca-certificates/170742.pem
	I1217 00:24:23.516527   25752 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/170742.pem
	I1217 00:24:23.524260   25752 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 00:24:23.536459   25752 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:24:23.542296   25752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 00:24:23.549635   25752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 00:24:23.556801   25752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 00:24:23.564140   25752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 00:24:23.571538   25752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 00:24:23.578672   25752 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1217 00:24:23.585769   25752 kubeadm.go:401] StartCluster: {Name:functional-698418 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-698418 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:24:23.585858   25752 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 00:24:23.585912   25752 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 00:24:23.620895   25752 cri.go:89] found id: "fe278b7670e033003fe8649b2824cf6b6516ce6b4b51066c8b25075f6c74ecd1"
	I1217 00:24:23.620906   25752 cri.go:89] found id: "25dad2630a2cc7b34ac11027a3984e04a6f04faeb4aea873980cbc58aa6ad446"
	I1217 00:24:23.620910   25752 cri.go:89] found id: "6da27c7e1968f00886d58afd572464f1215b610d6f810eae1693c39394b159ab"
	I1217 00:24:23.620913   25752 cri.go:89] found id: "1a94b2a880eb458a6a0cc8ace2efd1df4bc6d4ddbcf37229a45a6992bc612bc3"
	I1217 00:24:23.620917   25752 cri.go:89] found id: "4ccf8afdca857a06f5fafcae2d2203299adc9758a4deda7044a39b8fa9c78cf0"
	I1217 00:24:23.620921   25752 cri.go:89] found id: "95a7023d7b96443435118fb950ed0496e6a19ddc94f55a8e252d4ea702aa939b"
	I1217 00:24:23.620924   25752 cri.go:89] found id: "089ad298c6676a699fe577e2d5d71b0c488b5a9d3e33173bdc20901711843fd1"
	I1217 00:24:23.620927   25752 cri.go:89] found id: "7ce2d94b0a8558c089c3ffe1724c562dd1ef86bafb6973b68f04b587a0531bb2"
	I1217 00:24:23.620931   25752 cri.go:89] found id: "f6f055f0d6667bff85c0820d211944d7a5377cfa4e4de452b3ab190662b16761"
	I1217 00:24:23.620940   25752 cri.go:89] found id: ""
	I1217 00:24:23.620994   25752 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-698418 -n functional-698418
helpers_test.go:270: (dbg) Run:  kubectl --context functional-698418 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (1.85s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (302.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-698418 --alsologtostderr -v=1]
E1217 00:37:12.864293   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-069802/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:38:35.934208   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-069802/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:38:44.660859   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-698418 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-698418 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-698418 --alsologtostderr -v=1] stderr:
I1217 00:34:58.125710   29460 out.go:360] Setting OutFile to fd 1 ...
I1217 00:34:58.125819   29460 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:34:58.125827   29460 out.go:374] Setting ErrFile to fd 2...
I1217 00:34:58.125831   29460 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:34:58.126054   29460 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12839/.minikube/bin
I1217 00:34:58.126284   29460 mustload.go:66] Loading cluster: functional-698418
I1217 00:34:58.126622   29460 config.go:182] Loaded profile config "functional-698418": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1217 00:34:58.128916   29460 host.go:66] Checking if "functional-698418" exists ...
I1217 00:34:58.129175   29460 api_server.go:166] Checking apiserver status ...
I1217 00:34:58.129222   29460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1217 00:34:58.131823   29460 main.go:143] libmachine: domain functional-698418 has defined MAC address 52:54:00:29:90:fc in network mk-functional-698418
I1217 00:34:58.132292   29460 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:29:90:fc", ip: ""} in network mk-functional-698418: {Iface:virbr1 ExpiryTime:2025-12-17 01:18:18 +0000 UTC Type:0 Mac:52:54:00:29:90:fc Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:functional-698418 Clientid:01:52:54:00:29:90:fc}
I1217 00:34:58.132318   29460 main.go:143] libmachine: domain functional-698418 has defined IP address 192.168.39.109 and MAC address 52:54:00:29:90:fc in network mk-functional-698418
I1217 00:34:58.132512   29460 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/functional-698418/id_rsa Username:docker}
I1217 00:34:58.226654   29460 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/6889/cgroup
W1217 00:34:58.238494   29460 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/6889/cgroup: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1217 00:34:58.238561   29460 ssh_runner.go:195] Run: ls
I1217 00:34:58.243770   29460 api_server.go:253] Checking apiserver healthz at https://192.168.39.109:8441/healthz ...
I1217 00:34:58.249422   29460 api_server.go:279] https://192.168.39.109:8441/healthz returned 200:
ok
W1217 00:34:58.249496   29460 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1217 00:34:58.249710   29460 config.go:182] Loaded profile config "functional-698418": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1217 00:34:58.249738   29460 addons.go:70] Setting dashboard=true in profile "functional-698418"
I1217 00:34:58.249746   29460 addons.go:239] Setting addon dashboard=true in "functional-698418"
I1217 00:34:58.249778   29460 host.go:66] Checking if "functional-698418" exists ...
I1217 00:34:58.253534   29460 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1217 00:34:58.254988   29460 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1217 00:34:58.256491   29460 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1217 00:34:58.256509   29460 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1217 00:34:58.258957   29460 main.go:143] libmachine: domain functional-698418 has defined MAC address 52:54:00:29:90:fc in network mk-functional-698418
I1217 00:34:58.259392   29460 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:29:90:fc", ip: ""} in network mk-functional-698418: {Iface:virbr1 ExpiryTime:2025-12-17 01:18:18 +0000 UTC Type:0 Mac:52:54:00:29:90:fc Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:functional-698418 Clientid:01:52:54:00:29:90:fc}
I1217 00:34:58.259417   29460 main.go:143] libmachine: domain functional-698418 has defined IP address 192.168.39.109 and MAC address 52:54:00:29:90:fc in network mk-functional-698418
I1217 00:34:58.259580   29460 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/functional-698418/id_rsa Username:docker}
I1217 00:34:58.356082   29460 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1217 00:34:58.356111   29460 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1217 00:34:58.379714   29460 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1217 00:34:58.379737   29460 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1217 00:34:58.403077   29460 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1217 00:34:58.403100   29460 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1217 00:34:58.426903   29460 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1217 00:34:58.426927   29460 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1217 00:34:58.450312   29460 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
I1217 00:34:58.450333   29460 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1217 00:34:58.471285   29460 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1217 00:34:58.471315   29460 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1217 00:34:58.492846   29460 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1217 00:34:58.492869   29460 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1217 00:34:58.513915   29460 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1217 00:34:58.513945   29460 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1217 00:34:58.535312   29460 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1217 00:34:58.535339   29460 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1217 00:34:58.557089   29460 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1217 00:34:59.240272   29460 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

                                                
                                                
	minikube -p functional-698418 addons enable metrics-server

                                                
                                                
I1217 00:34:59.244267   29460 addons.go:202] Writing out "functional-698418" config to set dashboard=true...
W1217 00:34:59.244610   29460 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1217 00:34:59.245626   29460 kapi.go:59] client config for functional-698418: &rest.Config{Host:"https://192.168.39.109:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-698418/client.crt", KeyFile:"/home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-698418/client.key", CAFile:"/home/jenkins/minikube-integration/22168-12839/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1217 00:34:59.246351   29460 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1217 00:34:59.246370   29460 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1217 00:34:59.246374   29460 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1217 00:34:59.246378   29460 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1217 00:34:59.246382   29460 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1217 00:34:59.255832   29460 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  13cbbab1-7fcd-4ceb-85c1-36405d9b226b 1337 0 2025-12-17 00:34:59 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-12-17 00:34:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.104.88.77,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.104.88.77],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1217 00:34:59.255967   29460 out.go:285] * Launching proxy ...
* Launching proxy ...
I1217 00:34:59.256039   29460 dashboard.go:154] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-698418 proxy --port 36195]
I1217 00:34:59.256472   29460 dashboard.go:159] Waiting for kubectl to output host:port ...
I1217 00:34:59.299113   29460 dashboard.go:177] proxy stdout: Starting to serve on 127.0.0.1:36195
W1217 00:34:59.299153   29460 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I1217 00:34:59.308587   29460 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[73db5838-8f48-449e-882b-b9d60bc59f77] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 00:34:59 GMT]] Body:0xc00143c940 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001ccb40 TLS:<nil>}
I1217 00:34:59.308684   29460 retry.go:31] will retry after 135.509µs: Temporary Error: unexpected response code: 503
I1217 00:34:59.312431   29460 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b1a2f6b7-4635-4563-9977-8c7c7f065055] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 00:34:59 GMT]] Body:0xc001657500 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000997400 TLS:<nil>}
I1217 00:34:59.312481   29460 retry.go:31] will retry after 150.734µs: Temporary Error: unexpected response code: 503
I1217 00:34:59.316039   29460 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[eb5b1779-93f3-4178-a1a4-05bcb82a071f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 00:34:59 GMT]] Body:0xc00143ca40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002923c0 TLS:<nil>}
I1217 00:34:59.316100   29460 retry.go:31] will retry after 208.779µs: Temporary Error: unexpected response code: 503
I1217 00:34:59.319735   29460 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[96a72920-481c-49c0-9dd8-7b3724df7f80] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 00:34:59 GMT]] Body:0xc001657600 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000997540 TLS:<nil>}
I1217 00:34:59.319776   29460 retry.go:31] will retry after 218.361µs: Temporary Error: unexpected response code: 503
I1217 00:34:59.323532   29460 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2bc7666c-715e-4d3e-9aab-425eb4288632] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 00:34:59 GMT]] Body:0xc001544740 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000292640 TLS:<nil>}
I1217 00:34:59.323592   29460 retry.go:31] will retry after 411.043µs: Temporary Error: unexpected response code: 503
I1217 00:34:59.327174   29460 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b558db8d-9baa-4f27-9292-a028e0bf17c9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 00:34:59 GMT]] Body:0xc001657700 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001ccc80 TLS:<nil>}
I1217 00:34:59.327236   29460 retry.go:31] will retry after 531.122µs: Temporary Error: unexpected response code: 503
I1217 00:34:59.331371   29460 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a9a16063-48f1-4973-ba70-c320c51816a2] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 00:34:59 GMT]] Body:0xc0016577c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000292780 TLS:<nil>}
I1217 00:34:59.331403   29460 retry.go:31] will retry after 1.506336ms: Temporary Error: unexpected response code: 503
I1217 00:34:59.335747   29460 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a78a222a-3357-4caf-9d51-32cb420a7784] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 00:34:59 GMT]] Body:0xc00143cb40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000292a00 TLS:<nil>}
I1217 00:34:59.335793   29460 retry.go:31] will retry after 2.518439ms: Temporary Error: unexpected response code: 503
I1217 00:34:59.341244   29460 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cc34bf8b-7761-463d-9524-4f4940a8f1e1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 00:34:59 GMT]] Body:0xc001544840 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000997680 TLS:<nil>}
I1217 00:34:59.341282   29460 retry.go:31] will retry after 2.422624ms: Temporary Error: unexpected response code: 503
I1217 00:34:59.346963   29460 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[581bcf49-6eb7-4110-9616-65d92ea62e03] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 00:34:59 GMT]] Body:0xc0016578c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001ccdc0 TLS:<nil>}
I1217 00:34:59.346999   29460 retry.go:31] will retry after 2.414124ms: Temporary Error: unexpected response code: 503
I1217 00:34:59.353097   29460 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e72034a2-03b9-4e0d-b867-5c49648d375d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 00:34:59 GMT]] Body:0xc001544940 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000292b40 TLS:<nil>}
I1217 00:34:59.353145   29460 retry.go:31] will retry after 3.117266ms: Temporary Error: unexpected response code: 503
I1217 00:34:59.359750   29460 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7d2f4ce8-c277-4a6c-bad4-1bda9ea30103] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 00:34:59 GMT]] Body:0xc0016579c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001ccf00 TLS:<nil>}
I1217 00:34:59.359799   29460 retry.go:31] will retry after 11.892757ms: Temporary Error: unexpected response code: 503
I1217 00:34:59.375032   29460 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3c9cf640-35d0-40c3-b2b9-8199737c5a70] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 00:34:59 GMT]] Body:0xc00143cc80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000292f00 TLS:<nil>}
I1217 00:34:59.375084   29460 retry.go:31] will retry after 16.046173ms: Temporary Error: unexpected response code: 503
I1217 00:34:59.394289   29460 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[84f856f8-155a-4e81-a874-6f2800e48c9b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 00:34:59 GMT]] Body:0xc001657a80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0009977c0 TLS:<nil>}
I1217 00:34:59.394338   29460 retry.go:31] will retry after 13.32129ms: Temporary Error: unexpected response code: 503
I1217 00:34:59.413403   29460 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[61aae6bb-6041-447f-baec-22256dce319c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 00:34:59 GMT]] Body:0xc001544a40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000293040 TLS:<nil>}
I1217 00:34:59.413481   29460 retry.go:31] will retry after 41.103028ms: Temporary Error: unexpected response code: 503
I1217 00:34:59.458873   29460 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ce67906f-7a35-4555-bffe-78d305cfc924] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 00:34:59 GMT]] Body:0xc001657b80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cd180 TLS:<nil>}
I1217 00:34:59.458938   29460 retry.go:31] will retry after 26.611585ms: Temporary Error: unexpected response code: 503
I1217 00:34:59.489449   29460 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ad8bf67f-5289-49c2-8e0a-bce9ebe6b79f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 00:34:59 GMT]] Body:0xc00143cd80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000293180 TLS:<nil>}
I1217 00:34:59.489519   29460 retry.go:31] will retry after 69.183671ms: Temporary Error: unexpected response code: 503
I1217 00:34:59.562872   29460 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[58d3a540-9cde-4761-a2d8-538b92aa0332] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 00:34:59 GMT]] Body:0xc001544b40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000997900 TLS:<nil>}
I1217 00:34:59.562954   29460 retry.go:31] will retry after 49.92796ms: Temporary Error: unexpected response code: 503
I1217 00:34:59.616574   29460 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bf237245-ab2e-4aaa-b65e-5e123cdb3229] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 00:34:59 GMT]] Body:0xc00143ce80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cd2c0 TLS:<nil>}
I1217 00:34:59.616626   29460 retry.go:31] will retry after 208.885297ms: Temporary Error: unexpected response code: 503
I1217 00:34:59.829559   29460 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a713c3df-ac39-4dc2-994d-3f9da423ad25] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 00:34:59 GMT]] Body:0xc001544c40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000997a40 TLS:<nil>}
I1217 00:34:59.829613   29460 retry.go:31] will retry after 215.838528ms: Temporary Error: unexpected response code: 503
I1217 00:35:00.050111   29460 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3d50645a-a8d3-461f-9947-8d760a6dfb92] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 00:35:00 GMT]] Body:0xc001544d00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cd400 TLS:<nil>}
I1217 00:35:00.050181   29460 retry.go:31] will retry after 289.918484ms: Temporary Error: unexpected response code: 503
I1217 00:35:00.344056   29460 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b9bcde65-8d8c-49bd-9ac0-67b33afd078f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 00:35:00 GMT]] Body:0xc00143cf80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cd540 TLS:<nil>}
I1217 00:35:00.344144   29460 retry.go:31] will retry after 385.657741ms: Temporary Error: unexpected response code: 503
I1217 00:35:00.734383   29460 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[eca8cde1-a3f8-4a1e-aabf-6ea138e3e2a4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 00:35:00 GMT]] Body:0xc001657cc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000997b80 TLS:<nil>}
I1217 00:35:00.734448   29460 retry.go:31] will retry after 416.795236ms: Temporary Error: unexpected response code: 503
I1217 00:35:01.156368   29460 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[62626327-5675-47d1-ad90-dcaa8478cd7a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 00:35:01 GMT]] Body:0xc001657d80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002932c0 TLS:<nil>}
I1217 00:35:01.156421   29460 retry.go:31] will retry after 1.623498832s: Temporary Error: unexpected response code: 503
I1217 00:35:02.784419   29460 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ddec8750-d64e-42ba-b7e9-3b24fb17f8f8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 00:35:02 GMT]] Body:0xc001544e00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000293400 TLS:<nil>}
I1217 00:35:02.784477   29460 retry.go:31] will retry after 2.039129988s: Temporary Error: unexpected response code: 503
I1217 00:35:04.828281   29460 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8323fb01-c217-43da-82a3-e5eec861d06b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 00:35:04 GMT]] Body:0xc001544ec0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cd680 TLS:<nil>}
I1217 00:35:04.828357   29460 retry.go:31] will retry after 1.836036083s: Temporary Error: unexpected response code: 503
I1217 00:35:06.669996   29460 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[765b253a-0ba8-4a16-b5cb-e40d5beaedb5] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 00:35:06 GMT]] Body:0xc001657ec0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cd900 TLS:<nil>}
I1217 00:35:06.670092   29460 retry.go:31] will retry after 4.951867727s: Temporary Error: unexpected response code: 503
I1217 00:35:11.626915   29460 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[867bdf14-695a-4532-bf8a-8b58526680c8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 00:35:11 GMT]] Body:0xc001544fc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000997cc0 TLS:<nil>}
I1217 00:35:11.626988   29460 retry.go:31] will retry after 4.265756663s: Temporary Error: unexpected response code: 503
I1217 00:35:15.898963   29460 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[af945c72-df5e-4a76-9c35-85d1665f2b30] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 00:35:15 GMT]] Body:0xc00143d140 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cda40 TLS:<nil>}
I1217 00:35:15.899060   29460 retry.go:31] will retry after 8.396197576s: Temporary Error: unexpected response code: 503
I1217 00:35:24.299720   29460 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[67c06d0c-4b8a-4bf3-95b8-bddaf950a6a9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 00:35:24 GMT]] Body:0xc0016fa000 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cdb80 TLS:<nil>}
I1217 00:35:24.299783   29460 retry.go:31] will retry after 7.537999453s: Temporary Error: unexpected response code: 503
I1217 00:35:31.843635   29460 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[36895156-7e29-4ed5-9c0b-f0746a02a61f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 00:35:31 GMT]] Body:0xc0016fa100 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000293540 TLS:<nil>}
I1217 00:35:31.843698   29460 retry.go:31] will retry after 20.946225851s: Temporary Error: unexpected response code: 503
I1217 00:35:52.796293   29460 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8932c00c-48de-4bb5-bdc1-40f12580332b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 00:35:52 GMT]] Body:0xc001545100 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000997e00 TLS:<nil>}
I1217 00:35:52.796357   29460 retry.go:31] will retry after 19.50705987s: Temporary Error: unexpected response code: 503
I1217 00:36:12.308449   29460 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2be042f7-d192-48b6-aa8a-5f321f3c6d88] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 00:36:12 GMT]] Body:0xc0016fa180 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cdcc0 TLS:<nil>}
I1217 00:36:12.308513   29460 retry.go:31] will retry after 28.909666111s: Temporary Error: unexpected response code: 503
I1217 00:36:41.223222   29460 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2af40198-67f4-47ef-b396-9ce56efe69cc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 00:36:41 GMT]] Body:0xc001545200 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000293680 TLS:<nil>}
I1217 00:36:41.223304   29460 retry.go:31] will retry after 52.409189347s: Temporary Error: unexpected response code: 503
I1217 00:37:33.637207   29460 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cfdf4e69-ea9a-4721-9602-587f190ebaba] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 00:37:33 GMT]] Body:0xc001544080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cc3c0 TLS:<nil>}
I1217 00:37:33.637286   29460 retry.go:31] will retry after 1m21.92947248s: Temporary Error: unexpected response code: 503
I1217 00:38:55.572542   29460 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[28628049-737f-4659-891f-76f004a3be0b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Wed, 17 Dec 2025 00:38:55 GMT]] Body:0xc00143c080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cc500 TLS:<nil>}
I1217 00:38:55.572615   29460 retry.go:31] will retry after 1m6.006927748s: Temporary Error: unexpected response code: 503
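	(Editor's note) The 503 retries above follow the polling pattern printed by retry.go: keep requesting the dashboard's apiserver service-proxy URL with roughly doubling, jittered delays until it stops answering 503 or the overall budget runs out. The sketch below is illustrative only; the URL and the first delay are taken from the log lines above, while the function name, cap, and timeout are assumptions, not minikube's actual code.

	package main

	import (
		"fmt"
		"math/rand"
		"net/http"
		"time"
	)

	// pollDashboard keeps requesting the dashboard service-proxy URL until it
	// stops answering 503 or the overall deadline passes. Backoff roughly
	// doubles with jitter, mirroring the intervals printed by retry.go above.
	func pollDashboard(url string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		delay := 10 * time.Millisecond // first retry interval in the log is ~12ms
		for time.Now().Before(deadline) {
			resp, err := http.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode != http.StatusServiceUnavailable {
					return nil // anything other than 503 counts as progress
				}
			}
			// jittered exponential backoff, capped so one wait stays bounded
			time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
			if delay < 30*time.Second {
				delay *= 2
			}
		}
		return fmt.Errorf("dashboard still unavailable after %s", timeout)
	}

	func main() {
		// URL taken verbatim from the retry log; port 36195 is the proxy port
		// the test passed via `dashboard --url --port 36195`.
		url := "http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"
		if err := pollDashboard(url, 5*time.Minute); err != nil {
			fmt.Println(err)
		}
	}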
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-698418 -n functional-698418
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-698418 logs -n 25: (1.282588761s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                    ARGS                                                                     │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-698418 ssh findmnt -T /mount1                                                                                                    │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │ 17 Dec 25 00:32 UTC │
	│ ssh            │ functional-698418 ssh findmnt -T /mount2                                                                                                    │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │ 17 Dec 25 00:32 UTC │
	│ ssh            │ functional-698418 ssh findmnt -T /mount3                                                                                                    │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │ 17 Dec 25 00:32 UTC │
	│ mount          │ -p functional-698418 --kill=true                                                                                                            │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │                     │
	│ addons         │ functional-698418 addons list                                                                                                               │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │ 17 Dec 25 00:32 UTC │
	│ addons         │ functional-698418 addons list -o json                                                                                                       │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │ 17 Dec 25 00:32 UTC │
	│ start          │ -p functional-698418 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │                     │
	│ start          │ -p functional-698418 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0           │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │                     │
	│ start          │ -p functional-698418 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:34 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-698418 --alsologtostderr -v=1                                                                              │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:34 UTC │                     │
	│ service        │ functional-698418 service list                                                                                                              │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:38 UTC │ 17 Dec 25 00:38 UTC │
	│ service        │ functional-698418 service list -o json                                                                                                      │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:38 UTC │ 17 Dec 25 00:38 UTC │
	│ update-context │ functional-698418 update-context --alsologtostderr -v=2                                                                                     │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:38 UTC │ 17 Dec 25 00:38 UTC │
	│ update-context │ functional-698418 update-context --alsologtostderr -v=2                                                                                     │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:38 UTC │ 17 Dec 25 00:38 UTC │
	│ update-context │ functional-698418 update-context --alsologtostderr -v=2                                                                                     │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:38 UTC │ 17 Dec 25 00:38 UTC │
	│ image          │ functional-698418 image ls --format short --alsologtostderr                                                                                 │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:38 UTC │ 17 Dec 25 00:38 UTC │
	│ image          │ functional-698418 image ls --format yaml --alsologtostderr                                                                                  │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:38 UTC │ 17 Dec 25 00:38 UTC │
	│ ssh            │ functional-698418 ssh pgrep buildkitd                                                                                                       │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:38 UTC │                     │
	│ service        │ functional-698418 service --namespace=default --https --url hello-node                                                                      │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:38 UTC │                     │
	│ image          │ functional-698418 image build -t localhost/my-image:functional-698418 testdata/build --alsologtostderr                                      │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:38 UTC │ 17 Dec 25 00:38 UTC │
	│ service        │ functional-698418 service hello-node --url --format={{.IP}}                                                                                 │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:38 UTC │                     │
	│ service        │ functional-698418 service hello-node --url                                                                                                  │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:38 UTC │                     │
	│ image          │ functional-698418 image ls                                                                                                                  │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:38 UTC │ 17 Dec 25 00:38 UTC │
	│ image          │ functional-698418 image ls --format json --alsologtostderr                                                                                  │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:38 UTC │ 17 Dec 25 00:38 UTC │
	│ image          │ functional-698418 image ls --format table --alsologtostderr                                                                                 │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:38 UTC │ 17 Dec 25 00:38 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:34:58
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:34:58.017344   29429 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:34:58.017575   29429 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:34:58.017583   29429 out.go:374] Setting ErrFile to fd 2...
	I1217 00:34:58.017587   29429 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:34:58.017835   29429 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12839/.minikube/bin
	I1217 00:34:58.018256   29429 out.go:368] Setting JSON to false
	I1217 00:34:58.019096   29429 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4644,"bootTime":1765927054,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 00:34:58.019154   29429 start.go:143] virtualization: kvm guest
	I1217 00:34:58.021244   29429 out.go:179] * [functional-698418] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 00:34:58.022849   29429 notify.go:221] Checking for updates...
	I1217 00:34:58.022887   29429 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:34:58.024449   29429 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:34:58.025969   29429 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-12839/kubeconfig
	I1217 00:34:58.027336   29429 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12839/.minikube
	I1217 00:34:58.028756   29429 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 00:34:58.030134   29429 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:34:58.031812   29429 config.go:182] Loaded profile config "functional-698418": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:34:58.032278   29429 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:34:58.062597   29429 out.go:179] * Using the kvm2 driver based on existing profile
	I1217 00:34:58.063991   29429 start.go:309] selected driver: kvm2
	I1217 00:34:58.064002   29429 start.go:927] validating driver "kvm2" against &{Name:functional-698418 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-698418 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:34:58.064121   29429 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:34:58.066098   29429 out.go:203] 
	W1217 00:34:58.067330   29429 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1217 00:34:58.068510   29429 out.go:203] 
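	(Editor's note) This "Last Start" is the --dry-run with --memory 250MB, and it exits with RSRC_INSUFFICIENT_REQ_MEMORY because the request is below the usable minimum of 1800MB quoted in the message above. The following is a minimal sketch of that kind of start-time check; the names and structure are made up for illustration, only the threshold and error code come from the log.

	package main

	import "fmt"

	const minUsableMemoryMB = 1800 // minimum quoted in the RSRC_INSUFFICIENT_REQ_MEMORY message

	// validateMemory mimics the validation that rejected the --memory 250MB
	// dry run above. Hypothetical helper, not minikube's real implementation.
	func validateMemory(requestedMB int) error {
		if requestedMB < minUsableMemoryMB {
			return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested memory allocation %dMiB is less than the usable minimum of %dMB",
				requestedMB, minUsableMemoryMB)
		}
		return nil
	}

	func main() {
		if err := validateMemory(250); err != nil {
			fmt.Println("X Exiting due to", err)
		}
	}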
	
	
	==> CRI-O <==
	Dec 17 00:39:58 functional-698418 crio[6296]: time="2025-12-17 00:39:58.846208449Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b5f5abaf95cb222939db1aa90e240c94409e5d233b67a38091d26213a9ba10de,PodSandboxId:45138ffa0bb91dd23f83c681b3b9a410d900a685f09db3e936c18d79bcc7ff70,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765931069848478198,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-p2tnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375f206a-98e8-4a86-b794-274b2ac5d46d,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b30d8f3ed892ad639fe78bbed50c837c0bbb626952234350ebf7fa8426e1db2,PodSandboxId:1fbefd3a7421f756224811c63d74b6b380f3f61c9739cb3784937ab0f86c4362,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1765931069582611906,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmz66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 380aa506-9f03-4398-8d13-ac938ed6953c,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeabbda62f1da3d54be70bfe16b34f07c8e8a53a40811a81f10481ac9ce44216,PodSandboxId:a09572a7b0786aac784d1d4cef2dc3002c196faae35dcd055e33dd7b23a301bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765931069585947824,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aeec296-0f7d-489d-88c0-1a8f24bcdb27,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62c820b9f36a95750703dfbcacf0065cf51309e962dd5c6e6ba051e32abbd972,PodSandboxId:cb65bcef9b7f3551813702c3d2e3535cec74c3f21ef85ef8660fc3b5ed3ee837,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:1765931065819368164,Labels:map[string]string{io.kubernetes.container.name:
kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce7e9b17a3dd76e454bf214ca11d85f,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7787544c3b26bf83a9e4d701ebadf14ddc0d6c98ed581e4c9a78f8a3d5809692,PodSandboxId:5a541b1e17042c254bf59885fe7f01bb4f3cb8682cf724a0d3dc11c806811a96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d
6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1765931065782123739,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a9045876df478aae3a7b636723bc540,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6305bb233aef9c18d4ca9912e6cec1535112b9389a53466a2ae441f3d1c8b630,PodSandboxId:47788f5b505e2ab0b18e49d43a9d48485f0467d50546073946f22bc9d65190f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765931065773166649,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a712d99056792744476561e1a0361d20,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe278b7670e033003fe8649b2824cf6b6516ce6b4b51066c8b25075f6c74ecd1,PodSandboxId:167bfbac01f7e4c703b6085e2349b7dc06d514f766c93cf3cfe7e69d8a32a706,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa
5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765930971276417771,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-p2tnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375f206a-98e8-4a86-b794-274b2ac5d46d,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25dad2630a2cc7b34ac11027a3984e04a6f04faeb4aea873980cbc58aa6ad446,PodSandboxId:bca5c63a70f5569e938857e5287ac12f75220866ec46e3511d6af97931577564,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765930970784659776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aeec296-0f7d-489d-88c0-1a8f24bcdb27,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6da27c7e1968f00886d58afd572464f1215b610d6f810eae1693c39394b159ab,PodSandboxId:6e560eef5590b64b6701674c0b23f8ba8f5ae436c11a5b33c8b8479b163a789f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_EXITED,CreatedAt:1765930970517125898,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmz66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 380aa506-9f03-4398-8d13-ac938ed6953c,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessageP
olicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ccf8afdca857a06f5fafcae2d2203299adc9758a4deda7044a39b8fa9c78cf0,PodSandboxId:5b1069943f833c3bb09b24a2a7a536da2fcef186396d925dddd710e4b8fa9150,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765930894126083795,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a712d99056792744476561e1a0361d20,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:089ad298c6676a699fe577e2d5d71b0c488b5a9d3e33173bdc20901711843fd1,PodSandboxId:d4cac43b8d39630edc4155c4315349ac5c82c0155fc2675a8a0df421a0b3e285,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765930894091115325,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a9045876df478aae3a7b636723bc540,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"host
Port\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95a7023d7b96443435118fb950ed0496e6a19ddc94f55a8e252d4ea702aa939b,PodSandboxId:affe536f1f44e4a5eec045150c989f9cf8613992cd1beaf52e0734244ce8c1bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765930894095856281,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 390b595ba70cd6ac1adab7b4d760d832,},Annotations:map[string]
string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3ec80cb4-8bb6-4e8b-b492-a9c996defddb name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 00:39:58 functional-698418 crio[6296]: time="2025-12-17 00:39:58.876180865Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0611993f-07c0-4a28-ae19-59149c7a6648 name=/runtime.v1.RuntimeService/Version
	Dec 17 00:39:58 functional-698418 crio[6296]: time="2025-12-17 00:39:58.876251481Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0611993f-07c0-4a28-ae19-59149c7a6648 name=/runtime.v1.RuntimeService/Version
	Dec 17 00:39:58 functional-698418 crio[6296]: time="2025-12-17 00:39:58.877950332Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a17ad38e-59d8-46e4-b1e5-765bc3eb70d8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 00:39:58 functional-698418 crio[6296]: time="2025-12-17 00:39:58.878672389Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765931998878641612,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:189829,},InodesUsed:&UInt64Value{Value:89,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a17ad38e-59d8-46e4-b1e5-765bc3eb70d8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 00:39:58 functional-698418 crio[6296]: time="2025-12-17 00:39:58.879614904Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=33b62c4a-a53f-4b97-b343-1d76c4a4b8c1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 00:39:58 functional-698418 crio[6296]: time="2025-12-17 00:39:58.879668769Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=33b62c4a-a53f-4b97-b343-1d76c4a4b8c1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 00:39:58 functional-698418 crio[6296]: time="2025-12-17 00:39:58.879897148Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b5f5abaf95cb222939db1aa90e240c94409e5d233b67a38091d26213a9ba10de,PodSandboxId:45138ffa0bb91dd23f83c681b3b9a410d900a685f09db3e936c18d79bcc7ff70,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765931069848478198,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-p2tnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375f206a-98e8-4a86-b794-274b2ac5d46d,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b30d8f3ed892ad639fe78bbed50c837c0bbb626952234350ebf7fa8426e1db2,PodSandboxId:1fbefd3a7421f756224811c63d74b6b380f3f61c9739cb3784937ab0f86c4362,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1765931069582611906,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmz66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 380aa506-9f03-4398-8d13-ac938ed6953c,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeabbda62f1da3d54be70bfe16b34f07c8e8a53a40811a81f10481ac9ce44216,PodSandboxId:a09572a7b0786aac784d1d4cef2dc3002c196faae35dcd055e33dd7b23a301bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765931069585947824,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aeec296-0f7d-489d-88c0-1a8f24bcdb27,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62c820b9f36a95750703dfbcacf0065cf51309e962dd5c6e6ba051e32abbd972,PodSandboxId:cb65bcef9b7f3551813702c3d2e3535cec74c3f21ef85ef8660fc3b5ed3ee837,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:1765931065819368164,Labels:map[string]string{io.kubernetes.container.name:
kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce7e9b17a3dd76e454bf214ca11d85f,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7787544c3b26bf83a9e4d701ebadf14ddc0d6c98ed581e4c9a78f8a3d5809692,PodSandboxId:5a541b1e17042c254bf59885fe7f01bb4f3cb8682cf724a0d3dc11c806811a96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d
6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1765931065782123739,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a9045876df478aae3a7b636723bc540,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6305bb233aef9c18d4ca9912e6cec1535112b9389a53466a2ae441f3d1c8b630,PodSandboxId:47788f5b505e2ab0b18e49d43a9d48485f0467d50546073946f22bc9d65190f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765931065773166649,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a712d99056792744476561e1a0361d20,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe278b7670e033003fe8649b2824cf6b6516ce6b4b51066c8b25075f6c74ecd1,PodSandboxId:167bfbac01f7e4c703b6085e2349b7dc06d514f766c93cf3cfe7e69d8a32a706,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa
5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765930971276417771,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-p2tnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375f206a-98e8-4a86-b794-274b2ac5d46d,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25dad2630a2cc7b34ac11027a3984e04a6f04faeb4aea873980cbc58aa6ad446,PodSandboxId:bca5c63a70f5569e938857e5287ac12f75220866ec46e3511d6af97931577564,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765930970784659776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aeec296-0f7d-489d-88c0-1a8f24bcdb27,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6da27c7e1968f00886d58afd572464f1215b610d6f810eae1693c39394b159ab,PodSandboxId:6e560eef5590b64b6701674c0b23f8ba8f5ae436c11a5b33c8b8479b163a789f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_EXITED,CreatedAt:1765930970517125898,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmz66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 380aa506-9f03-4398-8d13-ac938ed6953c,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessageP
olicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ccf8afdca857a06f5fafcae2d2203299adc9758a4deda7044a39b8fa9c78cf0,PodSandboxId:5b1069943f833c3bb09b24a2a7a536da2fcef186396d925dddd710e4b8fa9150,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765930894126083795,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a712d99056792744476561e1a0361d20,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:089ad298c6676a699fe577e2d5d71b0c488b5a9d3e33173bdc20901711843fd1,PodSandboxId:d4cac43b8d39630edc4155c4315349ac5c82c0155fc2675a8a0df421a0b3e285,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765930894091115325,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a9045876df478aae3a7b636723bc540,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"host
Port\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95a7023d7b96443435118fb950ed0496e6a19ddc94f55a8e252d4ea702aa939b,PodSandboxId:affe536f1f44e4a5eec045150c989f9cf8613992cd1beaf52e0734244ce8c1bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765930894095856281,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 390b595ba70cd6ac1adab7b4d760d832,},Annotations:map[string]
string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=33b62c4a-a53f-4b97-b343-1d76c4a4b8c1 name=/runtime.v1.RuntimeService/ListContainers
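	(Editor's note) The ListContainers exchanges in this CRI-O journal are ordinary CRI gRPC calls with an empty filter, which is why the daemon logs "No filters were applied, returning full container list". A minimal, hedged sketch of issuing the same call against CRI-O's socket follows; the socket path and setup are assumptions, not taken from this report.

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Assumed CRI-O socket path; the report itself does not state it.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Same shape as the ListContainers requests in the debug log:
		// an empty filter returns the full container list.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Println(c.Metadata.Name, c.State)
		}
	}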
	Dec 17 00:39:58 functional-698418 crio[6296]: time="2025-12-17 00:39:58.912731952Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4c73e6e5-7b4f-41e0-b26b-b4d3c27a3c00 name=/runtime.v1.RuntimeService/Version
	Dec 17 00:39:58 functional-698418 crio[6296]: time="2025-12-17 00:39:58.912844712Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4c73e6e5-7b4f-41e0-b26b-b4d3c27a3c00 name=/runtime.v1.RuntimeService/Version
	Dec 17 00:39:58 functional-698418 crio[6296]: time="2025-12-17 00:39:58.913122181Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=2dabd6e9-a892-40b9-bd4b-10f139d8ec04 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 17 00:39:58 functional-698418 crio[6296]: time="2025-12-17 00:39:58.914185596Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:45138ffa0bb91dd23f83c681b3b9a410d900a685f09db3e936c18d79bcc7ff70,Metadata:&PodSandboxMetadata{Name:coredns-7d764666f9-p2tnv,Uid:375f206a-98e8-4a86-b794-274b2ac5d46d,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1765931069417254183,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7d764666f9-p2tnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375f206a-98e8-4a86-b794-274b2ac5d46d,k8s-app: kube-dns,pod-template-hash: 7d764666f9,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-17T00:24:28.898956791Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a09572a7b0786aac784d1d4cef2dc3002c196faae35dcd055e33dd7b23a301bf,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:8aeec296-0f7d-489d-88c0-1a8f24bcdb27,Namespace:kube-system
,Attempt:3,},State:SANDBOX_READY,CreatedAt:1765931069256290564,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aeec296-0f7d-489d-88c0-1a8f24bcdb27,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\
":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-12-17T00:24:28.898955389Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1fbefd3a7421f756224811c63d74b6b380f3f61c9739cb3784937ab0f86c4362,Metadata:&PodSandboxMetadata{Name:kube-proxy-qmz66,Uid:380aa506-9f03-4398-8d13-ac938ed6953c,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1765931069240460749,Labels:map[string]string{controller-revision-hash: 7bd5454df7,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-qmz66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 380aa506-9f03-4398-8d13-ac938ed6953c,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-17T00:24:28.898961527Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5a541b1e17042c254bf59885fe7f01bb4f3cb8682cf724a0d3dc11c806811a96,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-functional-698418,Uid:2
a9045876df478aae3a7b636723bc540,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1765931065589722246,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a9045876df478aae3a7b636723bc540,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2a9045876df478aae3a7b636723bc540,kubernetes.io/config.seen: 2025-12-17T00:24:24.906954007Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:cb65bcef9b7f3551813702c3d2e3535cec74c3f21ef85ef8660fc3b5ed3ee837,Metadata:&PodSandboxMetadata{Name:kube-apiserver-functional-698418,Uid:1ce7e9b17a3dd76e454bf214ca11d85f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765931065589245770,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-functional-698418,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 1ce7e9b17a3dd76e454bf214ca11d85f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.109:8441,kubernetes.io/config.hash: 1ce7e9b17a3dd76e454bf214ca11d85f,kubernetes.io/config.seen: 2025-12-17T00:24:24.906952914Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:47788f5b505e2ab0b18e49d43a9d48485f0467d50546073946f22bc9d65190f8,Metadata:&PodSandboxMetadata{Name:etcd-functional-698418,Uid:a712d99056792744476561e1a0361d20,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1765931065588791216,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a712d99056792744476561e1a0361d20,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.109:2379,kubernetes.io/config.hash: a
712d99056792744476561e1a0361d20,kubernetes.io/config.seen: 2025-12-17T00:24:24.906945395Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:167bfbac01f7e4c703b6085e2349b7dc06d514f766c93cf3cfe7e69d8a32a706,Metadata:&PodSandboxMetadata{Name:coredns-7d764666f9-p2tnv,Uid:375f206a-98e8-4a86-b794-274b2ac5d46d,Namespace:kube-system,Attempt:2,},State:SANDBOX_NOTREADY,CreatedAt:1765930970318176395,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7d764666f9-p2tnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375f206a-98e8-4a86-b794-274b2ac5d46d,k8s-app: kube-dns,pod-template-hash: 7d764666f9,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-17T00:21:37.094813101Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bca5c63a70f5569e938857e5287ac12f75220866ec46e3511d6af97931577564,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:8aeec296-0f7d-489d-88c0-1a8f24bcdb27,Namespace:kube-system,Attempt:2,},Stat
e:SANDBOX_NOTREADY,CreatedAt:1765930970314314045,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aeec296-0f7d-489d-88c0-1a8f24bcdb27,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"
/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-12-17T00:21:37.094811769Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6e560eef5590b64b6701674c0b23f8ba8f5ae436c11a5b33c8b8479b163a789f,Metadata:&PodSandboxMetadata{Name:kube-proxy-qmz66,Uid:380aa506-9f03-4398-8d13-ac938ed6953c,Namespace:kube-system,Attempt:2,},State:SANDBOX_NOTREADY,CreatedAt:1765930970020012264,Labels:map[string]string{controller-revision-hash: 7bd5454df7,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-qmz66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 380aa506-9f03-4398-8d13-ac938ed6953c,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-17T00:21:37.094809600Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d4cac43b8d39630edc4155c4315349ac5c82c0155fc2675a8a0df421a0b3e285,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-functional-698418,Uid:2a9045876df4
78aae3a7b636723bc540,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1765930768451249660,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a9045876df478aae3a7b636723bc540,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2a9045876df478aae3a7b636723bc540,kubernetes.io/config.seen: 2025-12-17T00:18:37.825797754Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:affe536f1f44e4a5eec045150c989f9cf8613992cd1beaf52e0734244ce8c1bd,Metadata:&PodSandboxMetadata{Name:kube-scheduler-functional-698418,Uid:390b595ba70cd6ac1adab7b4d760d832,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1765930768338346727,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-functional-698418,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 390b595ba70cd6ac1adab7b4d760d832,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 390b595ba70cd6ac1adab7b4d760d832,kubernetes.io/config.seen: 2025-12-17T00:18:37.825801625Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5b1069943f833c3bb09b24a2a7a536da2fcef186396d925dddd710e4b8fa9150,Metadata:&PodSandboxMetadata{Name:etcd-functional-698418,Uid:a712d99056792744476561e1a0361d20,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1765930768238266949,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a712d99056792744476561e1a0361d20,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.109:2379,kubernetes.io/config.hash: a712d99056792744476561e1a0361d20,kubernetes.io/config.seen: 2025-12-17T00:18:37.82580282
0Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=2dabd6e9-a892-40b9-bd4b-10f139d8ec04 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 17 00:39:58 functional-698418 crio[6296]: time="2025-12-17 00:39:58.915648903Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=18fd6aa5-c551-4fa6-8bee-8709636ef160 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 00:39:58 functional-698418 crio[6296]: time="2025-12-17 00:39:58.915719879Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=18fd6aa5-c551-4fa6-8bee-8709636ef160 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 00:39:58 functional-698418 crio[6296]: time="2025-12-17 00:39:58.916705133Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b5f5abaf95cb222939db1aa90e240c94409e5d233b67a38091d26213a9ba10de,PodSandboxId:45138ffa0bb91dd23f83c681b3b9a410d900a685f09db3e936c18d79bcc7ff70,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765931069848478198,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-p2tnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375f206a-98e8-4a86-b794-274b2ac5d46d,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b30d8f3ed892ad639fe78bbed50c837c0bbb626952234350ebf7fa8426e1db2,PodSandboxId:1fbefd3a7421f756224811c63d74b6b380f3f61c9739cb3784937ab0f86c4362,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1765931069582611906,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmz66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 380aa506-9f03-4398-8d13-ac938ed6953c,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeabbda62f1da3d54be70bfe16b34f07c8e8a53a40811a81f10481ac9ce44216,PodSandboxId:a09572a7b0786aac784d1d4cef2dc3002c196faae35dcd055e33dd7b23a301bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765931069585947824,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aeec296-0f7d-489d-88c0-1a8f24bcdb27,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62c820b9f36a95750703dfbcacf0065cf51309e962dd5c6e6ba051e32abbd972,PodSandboxId:cb65bcef9b7f3551813702c3d2e3535cec74c3f21ef85ef8660fc3b5ed3ee837,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:1765931065819368164,Labels:map[string]string{io.kubernetes.container.name:
kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce7e9b17a3dd76e454bf214ca11d85f,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7787544c3b26bf83a9e4d701ebadf14ddc0d6c98ed581e4c9a78f8a3d5809692,PodSandboxId:5a541b1e17042c254bf59885fe7f01bb4f3cb8682cf724a0d3dc11c806811a96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d
6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1765931065782123739,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a9045876df478aae3a7b636723bc540,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6305bb233aef9c18d4ca9912e6cec1535112b9389a53466a2ae441f3d1c8b630,PodSandboxId:47788f5b505e2ab0b18e49d43a9d48485f0467d50546073946f22bc9d65190f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765931065773166649,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a712d99056792744476561e1a0361d20,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe278b7670e033003fe8649b2824cf6b6516ce6b4b51066c8b25075f6c74ecd1,PodSandboxId:167bfbac01f7e4c703b6085e2349b7dc06d514f766c93cf3cfe7e69d8a32a706,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa
5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765930971276417771,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-p2tnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375f206a-98e8-4a86-b794-274b2ac5d46d,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25dad2630a2cc7b34ac11027a3984e04a6f04faeb4aea873980cbc58aa6ad446,PodSandboxId:bca5c63a70f5569e938857e5287ac12f75220866ec46e3511d6af97931577564,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765930970784659776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aeec296-0f7d-489d-88c0-1a8f24bcdb27,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6da27c7e1968f00886d58afd572464f1215b610d6f810eae1693c39394b159ab,PodSandboxId:6e560eef5590b64b6701674c0b23f8ba8f5ae436c11a5b33c8b8479b163a789f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_EXITED,CreatedAt:1765930970517125898,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmz66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 380aa506-9f03-4398-8d13-ac938ed6953c,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessageP
olicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ccf8afdca857a06f5fafcae2d2203299adc9758a4deda7044a39b8fa9c78cf0,PodSandboxId:5b1069943f833c3bb09b24a2a7a536da2fcef186396d925dddd710e4b8fa9150,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765930894126083795,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a712d99056792744476561e1a0361d20,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:089ad298c6676a699fe577e2d5d71b0c488b5a9d3e33173bdc20901711843fd1,PodSandboxId:d4cac43b8d39630edc4155c4315349ac5c82c0155fc2675a8a0df421a0b3e285,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765930894091115325,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a9045876df478aae3a7b636723bc540,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"host
Port\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95a7023d7b96443435118fb950ed0496e6a19ddc94f55a8e252d4ea702aa939b,PodSandboxId:affe536f1f44e4a5eec045150c989f9cf8613992cd1beaf52e0734244ce8c1bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765930894095856281,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 390b595ba70cd6ac1adab7b4d760d832,},Annotations:map[string]
string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=18fd6aa5-c551-4fa6-8bee-8709636ef160 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 00:39:58 functional-698418 crio[6296]: time="2025-12-17 00:39:58.919332152Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=865228b5-6140-4efd-b556-dbcb6bf1eb34 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 00:39:58 functional-698418 crio[6296]: time="2025-12-17 00:39:58.920050423Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765931998920025734,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:189829,},InodesUsed:&UInt64Value{Value:89,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=865228b5-6140-4efd-b556-dbcb6bf1eb34 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 00:39:58 functional-698418 crio[6296]: time="2025-12-17 00:39:58.921801712Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3ef582d7-73b7-494d-ab21-9a827091af08 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 00:39:58 functional-698418 crio[6296]: time="2025-12-17 00:39:58.921896932Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3ef582d7-73b7-494d-ab21-9a827091af08 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 00:39:58 functional-698418 crio[6296]: time="2025-12-17 00:39:58.922101640Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b5f5abaf95cb222939db1aa90e240c94409e5d233b67a38091d26213a9ba10de,PodSandboxId:45138ffa0bb91dd23f83c681b3b9a410d900a685f09db3e936c18d79bcc7ff70,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765931069848478198,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-p2tnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375f206a-98e8-4a86-b794-274b2ac5d46d,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b30d8f3ed892ad639fe78bbed50c837c0bbb626952234350ebf7fa8426e1db2,PodSandboxId:1fbefd3a7421f756224811c63d74b6b380f3f61c9739cb3784937ab0f86c4362,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1765931069582611906,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmz66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 380aa506-9f03-4398-8d13-ac938ed6953c,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeabbda62f1da3d54be70bfe16b34f07c8e8a53a40811a81f10481ac9ce44216,PodSandboxId:a09572a7b0786aac784d1d4cef2dc3002c196faae35dcd055e33dd7b23a301bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765931069585947824,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aeec296-0f7d-489d-88c0-1a8f24bcdb27,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62c820b9f36a95750703dfbcacf0065cf51309e962dd5c6e6ba051e32abbd972,PodSandboxId:cb65bcef9b7f3551813702c3d2e3535cec74c3f21ef85ef8660fc3b5ed3ee837,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:1765931065819368164,Labels:map[string]string{io.kubernetes.container.name:
kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce7e9b17a3dd76e454bf214ca11d85f,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7787544c3b26bf83a9e4d701ebadf14ddc0d6c98ed581e4c9a78f8a3d5809692,PodSandboxId:5a541b1e17042c254bf59885fe7f01bb4f3cb8682cf724a0d3dc11c806811a96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d
6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1765931065782123739,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a9045876df478aae3a7b636723bc540,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6305bb233aef9c18d4ca9912e6cec1535112b9389a53466a2ae441f3d1c8b630,PodSandboxId:47788f5b505e2ab0b18e49d43a9d48485f0467d50546073946f22bc9d65190f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765931065773166649,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a712d99056792744476561e1a0361d20,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe278b7670e033003fe8649b2824cf6b6516ce6b4b51066c8b25075f6c74ecd1,PodSandboxId:167bfbac01f7e4c703b6085e2349b7dc06d514f766c93cf3cfe7e69d8a32a706,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa
5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765930971276417771,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-p2tnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375f206a-98e8-4a86-b794-274b2ac5d46d,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25dad2630a2cc7b34ac11027a3984e04a6f04faeb4aea873980cbc58aa6ad446,PodSandboxId:bca5c63a70f5569e938857e5287ac12f75220866ec46e3511d6af97931577564,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765930970784659776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aeec296-0f7d-489d-88c0-1a8f24bcdb27,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6da27c7e1968f00886d58afd572464f1215b610d6f810eae1693c39394b159ab,PodSandboxId:6e560eef5590b64b6701674c0b23f8ba8f5ae436c11a5b33c8b8479b163a789f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_EXITED,CreatedAt:1765930970517125898,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmz66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 380aa506-9f03-4398-8d13-ac938ed6953c,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessageP
olicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ccf8afdca857a06f5fafcae2d2203299adc9758a4deda7044a39b8fa9c78cf0,PodSandboxId:5b1069943f833c3bb09b24a2a7a536da2fcef186396d925dddd710e4b8fa9150,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765930894126083795,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a712d99056792744476561e1a0361d20,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:089ad298c6676a699fe577e2d5d71b0c488b5a9d3e33173bdc20901711843fd1,PodSandboxId:d4cac43b8d39630edc4155c4315349ac5c82c0155fc2675a8a0df421a0b3e285,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765930894091115325,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a9045876df478aae3a7b636723bc540,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"host
Port\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95a7023d7b96443435118fb950ed0496e6a19ddc94f55a8e252d4ea702aa939b,PodSandboxId:affe536f1f44e4a5eec045150c989f9cf8613992cd1beaf52e0734244ce8c1bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765930894095856281,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 390b595ba70cd6ac1adab7b4d760d832,},Annotations:map[string]
string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3ef582d7-73b7-494d-ab21-9a827091af08 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 00:39:58 functional-698418 crio[6296]: time="2025-12-17 00:39:58.939078316Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=443a9f2c-f83c-468f-ae26-0fbadafac8a7 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 17 00:39:58 functional-698418 crio[6296]: time="2025-12-17 00:39:58.939261964Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:45138ffa0bb91dd23f83c681b3b9a410d900a685f09db3e936c18d79bcc7ff70,Metadata:&PodSandboxMetadata{Name:coredns-7d764666f9-p2tnv,Uid:375f206a-98e8-4a86-b794-274b2ac5d46d,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1765931069417254183,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7d764666f9-p2tnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375f206a-98e8-4a86-b794-274b2ac5d46d,k8s-app: kube-dns,pod-template-hash: 7d764666f9,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-17T00:24:28.898956791Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a09572a7b0786aac784d1d4cef2dc3002c196faae35dcd055e33dd7b23a301bf,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:8aeec296-0f7d-489d-88c0-1a8f24bcdb27,Namespace:kube-system
,Attempt:3,},State:SANDBOX_READY,CreatedAt:1765931069256290564,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aeec296-0f7d-489d-88c0-1a8f24bcdb27,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\
":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-12-17T00:24:28.898955389Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1fbefd3a7421f756224811c63d74b6b380f3f61c9739cb3784937ab0f86c4362,Metadata:&PodSandboxMetadata{Name:kube-proxy-qmz66,Uid:380aa506-9f03-4398-8d13-ac938ed6953c,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1765931069240460749,Labels:map[string]string{controller-revision-hash: 7bd5454df7,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-qmz66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 380aa506-9f03-4398-8d13-ac938ed6953c,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-17T00:24:28.898961527Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5a541b1e17042c254bf59885fe7f01bb4f3cb8682cf724a0d3dc11c806811a96,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-functional-698418,Uid:2
a9045876df478aae3a7b636723bc540,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1765931065589722246,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a9045876df478aae3a7b636723bc540,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2a9045876df478aae3a7b636723bc540,kubernetes.io/config.seen: 2025-12-17T00:24:24.906954007Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:cb65bcef9b7f3551813702c3d2e3535cec74c3f21ef85ef8660fc3b5ed3ee837,Metadata:&PodSandboxMetadata{Name:kube-apiserver-functional-698418,Uid:1ce7e9b17a3dd76e454bf214ca11d85f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765931065589245770,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-functional-698418,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 1ce7e9b17a3dd76e454bf214ca11d85f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.109:8441,kubernetes.io/config.hash: 1ce7e9b17a3dd76e454bf214ca11d85f,kubernetes.io/config.seen: 2025-12-17T00:24:24.906952914Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:47788f5b505e2ab0b18e49d43a9d48485f0467d50546073946f22bc9d65190f8,Metadata:&PodSandboxMetadata{Name:etcd-functional-698418,Uid:a712d99056792744476561e1a0361d20,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1765931065588791216,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a712d99056792744476561e1a0361d20,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.109:2379,kubernetes.io/config.hash: a
712d99056792744476561e1a0361d20,kubernetes.io/config.seen: 2025-12-17T00:24:24.906945395Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=443a9f2c-f83c-468f-ae26-0fbadafac8a7 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 17 00:39:58 functional-698418 crio[6296]: time="2025-12-17 00:39:58.939983137Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8178fcec-d33d-4668-b708-4d31459ef167 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 00:39:58 functional-698418 crio[6296]: time="2025-12-17 00:39:58.940040201Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8178fcec-d33d-4668-b708-4d31459ef167 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 00:39:58 functional-698418 crio[6296]: time="2025-12-17 00:39:58.940171210Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b5f5abaf95cb222939db1aa90e240c94409e5d233b67a38091d26213a9ba10de,PodSandboxId:45138ffa0bb91dd23f83c681b3b9a410d900a685f09db3e936c18d79bcc7ff70,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765931069848478198,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-p2tnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375f206a-98e8-4a86-b794-274b2ac5d46d,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b30d8f3ed892ad639fe78bbed50c837c0bbb626952234350ebf7fa8426e1db2,PodSandboxId:1fbefd3a7421f756224811c63d74b6b380f3f61c9739cb3784937ab0f86c4362,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1765931069582611906,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmz66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 380aa506-9f03-4398-8d13-ac938ed6953c,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeabbda62f1da3d54be70bfe16b34f07c8e8a53a40811a81f10481ac9ce44216,PodSandboxId:a09572a7b0786aac784d1d4cef2dc3002c196faae35dcd055e33dd7b23a301bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765931069585947824,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aeec296-0f7d-489d-88c0-1a8f24bcdb27,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62c820b9f36a95750703dfbcacf0065cf51309e962dd5c6e6ba051e32abbd972,PodSandboxId:cb65bcef9b7f3551813702c3d2e3535cec74c3f21ef85ef8660fc3b5ed3ee837,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:1765931065819368164,Labels:map[string]string{io.kubernetes.container.name:
kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce7e9b17a3dd76e454bf214ca11d85f,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7787544c3b26bf83a9e4d701ebadf14ddc0d6c98ed581e4c9a78f8a3d5809692,PodSandboxId:5a541b1e17042c254bf59885fe7f01bb4f3cb8682cf724a0d3dc11c806811a96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d
6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1765931065782123739,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a9045876df478aae3a7b636723bc540,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6305bb233aef9c18d4ca9912e6cec1535112b9389a53466a2ae441f3d1c8b630,PodSandboxId:47788f5b505e2ab0b18e49d43a9d48485f0467d50546073946f22bc9d65190f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765931065773166649,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a712d99056792744476561e1a0361d20,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8178fcec-d33d-4668-b708-4d31459ef167 name=/runtime.v1.RuntimeService/ListContainers
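	
	The paired "Request:"/"Response:" entries above are CRI gRPC calls against CRI-O's RuntimeService, logged at debug level by its otel-collector interceptors; the burst of Version/ListPodSandbox/ListContainers calls at 00:39:58 is consistent with status collection for this post-mortem log dump. For reference, a roughly equivalent query can be issued by hand against the same runtime; the commands below are illustrative only and not part of the captured run (they assume the crictl binary shipped in the minikube guest):
	
	  out/minikube-linux-amd64 -p functional-698418 ssh "sudo crictl pods"    # pod sandboxes, as in ListPodSandbox
	  out/minikube-linux-amd64 -p functional-698418 ssh "sudo crictl ps -a"   # all containers, as in ListContainers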
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	b5f5abaf95cb2       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139   15 minutes ago      Running             coredns                   3                   45138ffa0bb91       coredns-7d764666f9-p2tnv                    kube-system
	eeabbda62f1da       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       3                   a09572a7b0786       storage-provisioner                         kube-system
	8b30d8f3ed892       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   15 minutes ago      Running             kube-proxy                3                   1fbefd3a7421f       kube-proxy-qmz66                            kube-system
	62c820b9f36a9       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   15 minutes ago      Running             kube-apiserver            0                   cb65bcef9b7f3       kube-apiserver-functional-698418            kube-system
	7787544c3b26b       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   15 minutes ago      Running             kube-controller-manager   3                   5a541b1e17042       kube-controller-manager-functional-698418   kube-system
	6305bb233aef9       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   15 minutes ago      Running             etcd                      3                   47788f5b505e2       etcd-functional-698418                      kube-system
	fe278b7670e03       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139   17 minutes ago      Exited              coredns                   2                   167bfbac01f7e       coredns-7d764666f9-p2tnv                    kube-system
	25dad2630a2cc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   17 minutes ago      Exited              storage-provisioner       2                   bca5c63a70f55       storage-provisioner                         kube-system
	6da27c7e1968f       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   17 minutes ago      Exited              kube-proxy                2                   6e560eef5590b       kube-proxy-qmz66                            kube-system
	4ccf8afdca857       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   18 minutes ago      Exited              etcd                      2                   5b1069943f833       etcd-functional-698418                      kube-system
	95a7023d7b964       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   18 minutes ago      Exited              kube-scheduler            2                   affe536f1f44e       kube-scheduler-functional-698418            kube-system
	089ad298c6676       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   18 minutes ago      Exited              kube-controller-manager   2                   d4cac43b8d396       kube-controller-manager-functional-698418   kube-system
	
	
	==> coredns [b5f5abaf95cb222939db1aa90e240c94409e5d233b67a38091d26213a9ba10de] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:53680 - 57417 "HINFO IN 5216687169014558221.4342564943848837697. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.019890703s
	
	
	==> coredns [fe278b7670e033003fe8649b2824cf6b6516ce6b4b51066c8b25075f6c74ecd1] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:35643 - 58571 "HINFO IN 8723388857180390004.4112128438720857375. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.021681051s
	
	
	==> describe nodes <==
	Name:               functional-698418
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-698418
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1
	                    minikube.k8s.io/name=functional-698418
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T00_18_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 00:18:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-698418
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 00:39:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 00:39:04 +0000   Wed, 17 Dec 2025 00:18:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 00:39:04 +0000   Wed, 17 Dec 2025 00:18:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 00:39:04 +0000   Wed, 17 Dec 2025 00:18:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 00:39:04 +0000   Wed, 17 Dec 2025 00:18:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.109
	  Hostname:    functional-698418
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001784Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001784Ki
	  pods:               110
	System Info:
	  Machine ID:                 4dd443fff9c14f00b485986b75d25594
	  System UUID:                4dd443ff-f9c1-4f00-b485-986b75d25594
	  Boot ID:                    cfa996e4-9a58-45c9-b4e8-fda78786a8ea
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7d764666f9-p2tnv                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     21m
	  kube-system                 etcd-functional-698418                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         21m
	  kube-system                 kube-apiserver-functional-698418             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-functional-698418    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-qmz66                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-functional-698418             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  21m   node-controller  Node functional-698418 event: Registered Node functional-698418 in Controller
	  Normal  RegisteredNode  18m   node-controller  Node functional-698418 event: Registered Node functional-698418 in Controller
	  Normal  RegisteredNode  15m   node-controller  Node functional-698418 event: Registered Node functional-698418 in Controller
	
	
	==> dmesg <==
	[Dec17 00:18] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001752] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.001838] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.180772] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000015] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.087000] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.097371] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.150666] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.082933] kauditd_printk_skb: 18 callbacks suppressed
	[  +3.378145] kauditd_printk_skb: 296 callbacks suppressed
	[Dec17 00:19] kauditd_printk_skb: 350 callbacks suppressed
	[Dec17 00:21] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.190827] kauditd_printk_skb: 57 callbacks suppressed
	[Dec17 00:22] kauditd_printk_skb: 12 callbacks suppressed
	[Dec17 00:24] kauditd_printk_skb: 254 callbacks suppressed
	[  +4.295954] kauditd_printk_skb: 154 callbacks suppressed
	[Dec17 00:25] kauditd_printk_skb: 134 callbacks suppressed
	[Dec17 00:28] kauditd_printk_skb: 14 callbacks suppressed
	[  +4.344537] kauditd_printk_skb: 14 callbacks suppressed
	[Dec17 00:34] kauditd_printk_skb: 2 callbacks suppressed
	[Dec17 00:38] crun[9818]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +2.923913] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [4ccf8afdca857a06f5fafcae2d2203299adc9758a4deda7044a39b8fa9c78cf0] <==
	{"level":"warn","ts":"2025-12-17T00:21:35.657581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:21:35.667886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:21:35.672758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:21:35.684396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:21:35.706567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:21:35.715915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:21:35.760731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51998","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-17T00:22:44.130881Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-17T00:22:44.130978Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-698418","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.109:2380"],"advertise-client-urls":["https://192.168.39.109:2379"]}
	{"level":"error","ts":"2025-12-17T00:22:44.131075Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-17T00:22:44.221581Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-17T00:22:44.221679Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-17T00:22:44.221712Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"22872ffef731375a","current-leader-member-id":"22872ffef731375a"}
	{"level":"info","ts":"2025-12-17T00:22:44.221792Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-17T00:22:44.221802Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-17T00:22:44.221922Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-17T00:22:44.222028Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-17T00:22:44.222046Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-17T00:22:44.222103Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.109:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-17T00:22:44.222119Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.109:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-17T00:22:44.222160Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.109:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-17T00:22:44.225852Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.109:2380"}
	{"level":"error","ts":"2025-12-17T00:22:44.225922Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.109:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-17T00:22:44.225945Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.109:2380"}
	{"level":"info","ts":"2025-12-17T00:22:44.225950Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-698418","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.109:2380"],"advertise-client-urls":["https://192.168.39.109:2379"]}
	
	
	==> etcd [6305bb233aef9c18d4ca9912e6cec1535112b9389a53466a2ae441f3d1c8b630] <==
	{"level":"warn","ts":"2025-12-17T00:24:27.337374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.345255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.354619Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.365619Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.374882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.383058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.391645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.400215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.408059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.415237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.422074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.440217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.454486Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.462999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.477140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.484854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.493591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.501969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.546639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39884","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-17T00:34:26.939994Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1007}
	{"level":"info","ts":"2025-12-17T00:34:26.965323Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1007,"took":"23.382262ms","hash":791219146,"current-db-size-bytes":2891776,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":1105920,"current-db-size-in-use":"1.1 MB"}
	{"level":"info","ts":"2025-12-17T00:34:26.965399Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":791219146,"revision":1007,"compact-revision":-1}
	{"level":"info","ts":"2025-12-17T00:39:26.948677Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1263}
	{"level":"info","ts":"2025-12-17T00:39:26.952857Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1263,"took":"3.836141ms","hash":1911952928,"current-db-size-bytes":2891776,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":1781760,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-12-17T00:39:26.952893Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1911952928,"revision":1263,"compact-revision":1007}
	
	
	==> kernel <==
	 00:39:59 up 21 min,  0 users,  load average: 0.08, 0.11, 0.13
	Linux functional-698418 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Dec 16 03:41:16 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [62c820b9f36a95750703dfbcacf0065cf51309e962dd5c6e6ba051e32abbd972] <==
	I1217 00:24:28.281042       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1217 00:24:28.281300       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:28.281352       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1217 00:24:28.282140       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:28.282199       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:28.283545       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1217 00:24:28.292001       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1217 00:24:28.294125       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 00:24:28.967261       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 00:24:29.085463       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1217 00:24:30.326310       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 00:24:30.386614       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1217 00:24:30.419123       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 00:24:30.426405       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 00:24:31.694480       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 00:24:31.744240       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 00:28:41.476382       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.102.113.226"}
	I1217 00:28:45.404723       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.109.246.234"}
	I1217 00:28:45.460397       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 00:28:46.160211       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.101.111.117"}
	I1217 00:32:56.403257       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.97.89.136"}
	I1217 00:34:28.200355       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 00:34:58.933078       1 controller.go:667] quota admission added evaluator for: namespaces
	I1217 00:34:59.188964       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.88.77"}
	I1217 00:34:59.214738       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.7.189"}
	
	
	==> kube-controller-manager [089ad298c6676a699fe577e2d5d71b0c488b5a9d3e33173bdc20901711843fd1] <==
	I1217 00:21:39.058814       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.063583       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.063639       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.063668       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.064068       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.064411       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.064486       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.064629       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.064697       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.064728       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.064803       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.064959       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.065112       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.066039       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.067219       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 00:21:39.067679       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.068598       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.068694       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.068784       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.069012       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.073643       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.154923       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.154942       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1217 00:21:39.154946       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1217 00:21:39.168785       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-controller-manager [7787544c3b26bf83a9e4d701ebadf14ddc0d6c98ed581e4c9a78f8a3d5809692] <==
	I1217 00:24:31.474366       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.474425       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.475861       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1217 00:24:31.476235       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-698418"
	I1217 00:24:31.474433       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.472707       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.472720       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.472725       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.476353       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1217 00:24:31.474449       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.472698       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.474439       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.478289       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.474444       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.478338       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.541643       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.751306       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	E1217 00:34:59.037812       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1217 00:34:59.048787       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1217 00:34:59.057113       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1217 00:34:59.065963       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1217 00:34:59.074857       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1217 00:34:59.074954       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1217 00:34:59.083829       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1217 00:34:59.091457       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [6da27c7e1968f00886d58afd572464f1215b610d6f810eae1693c39394b159ab] <==
	I1217 00:22:50.913830       1 server_linux.go:53] "Using iptables proxy"
	I1217 00:22:51.014579       1 shared_informer.go:370] "Waiting for caches to sync"
	
	
	==> kube-proxy [8b30d8f3ed892ad639fe78bbed50c837c0bbb626952234350ebf7fa8426e1db2] <==
	I1217 00:24:29.960125       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 00:24:30.061466       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:30.061553       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.109"]
	E1217 00:24:30.061659       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 00:24:30.113793       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1217 00:24:30.114041       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1217 00:24:30.114249       1 server_linux.go:136] "Using iptables Proxier"
	I1217 00:24:30.163601       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 00:24:30.164393       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1217 00:24:30.164446       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 00:24:30.181144       1 config.go:200] "Starting service config controller"
	I1217 00:24:30.185060       1 config.go:106] "Starting endpoint slice config controller"
	I1217 00:24:30.185270       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 00:24:30.185303       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 00:24:30.185307       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 00:24:30.181175       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 00:24:30.198089       1 config.go:309] "Starting node config controller"
	I1217 00:24:30.198119       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 00:24:30.287379       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 00:24:30.289567       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 00:24:30.289579       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 00:24:30.298372       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [95a7023d7b96443435118fb950ed0496e6a19ddc94f55a8e252d4ea702aa939b] <==
	E1217 00:21:36.448305       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope"
	E1217 00:21:36.449061       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1217 00:21:36.449576       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1217 00:21:36.453581       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1217 00:21:36.448485       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1217 00:21:36.449996       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1217 00:21:36.450331       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1217 00:21:36.451038       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1217 00:21:36.451413       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1217 00:21:36.451819       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1217 00:21:36.453319       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1217 00:21:36.449614       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1217 00:21:36.453890       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1217 00:21:36.454295       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1217 00:21:36.459695       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1217 00:21:36.459774       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1217 00:21:36.459849       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1217 00:21:36.460344       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	I1217 00:21:39.389301       1 shared_informer.go:377] "Caches are synced"
	I1217 00:22:44.142866       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1217 00:22:44.142910       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1217 00:22:44.142920       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1217 00:22:44.142981       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 00:22:44.143070       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1217 00:22:44.143163       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 17 00:39:25 functional-698418 kubelet[6659]: E1217 00:39:25.052178    6659 manager.go:1119] Failed to create existing container: /kubepods/burstable/pod390b595ba70cd6ac1adab7b4d760d832/crio-affe536f1f44e4a5eec045150c989f9cf8613992cd1beaf52e0734244ce8c1bd: Error finding container affe536f1f44e4a5eec045150c989f9cf8613992cd1beaf52e0734244ce8c1bd: Status 404 returned error can't find the container with id affe536f1f44e4a5eec045150c989f9cf8613992cd1beaf52e0734244ce8c1bd
	Dec 17 00:39:25 functional-698418 kubelet[6659]: E1217 00:39:25.052600    6659 manager.go:1119] Failed to create existing container: /kubepods/burstable/pod375f206a-98e8-4a86-b794-274b2ac5d46d/crio-167bfbac01f7e4c703b6085e2349b7dc06d514f766c93cf3cfe7e69d8a32a706: Error finding container 167bfbac01f7e4c703b6085e2349b7dc06d514f766c93cf3cfe7e69d8a32a706: Status 404 returned error can't find the container with id 167bfbac01f7e4c703b6085e2349b7dc06d514f766c93cf3cfe7e69d8a32a706
	Dec 17 00:39:25 functional-698418 kubelet[6659]: E1217 00:39:25.053013    6659 manager.go:1119] Failed to create existing container: /kubepods/burstable/pod2a9045876df478aae3a7b636723bc540/crio-d4cac43b8d39630edc4155c4315349ac5c82c0155fc2675a8a0df421a0b3e285: Error finding container d4cac43b8d39630edc4155c4315349ac5c82c0155fc2675a8a0df421a0b3e285: Status 404 returned error can't find the container with id d4cac43b8d39630edc4155c4315349ac5c82c0155fc2675a8a0df421a0b3e285
	Dec 17 00:39:25 functional-698418 kubelet[6659]: E1217 00:39:25.053302    6659 manager.go:1119] Failed to create existing container: /kubepods/burstable/poda712d99056792744476561e1a0361d20/crio-5b1069943f833c3bb09b24a2a7a536da2fcef186396d925dddd710e4b8fa9150: Error finding container 5b1069943f833c3bb09b24a2a7a536da2fcef186396d925dddd710e4b8fa9150: Status 404 returned error can't find the container with id 5b1069943f833c3bb09b24a2a7a536da2fcef186396d925dddd710e4b8fa9150
	Dec 17 00:39:25 functional-698418 kubelet[6659]: E1217 00:39:25.053736    6659 manager.go:1119] Failed to create existing container: /kubepods/besteffort/pod380aa506-9f03-4398-8d13-ac938ed6953c/crio-6e560eef5590b64b6701674c0b23f8ba8f5ae436c11a5b33c8b8479b163a789f: Error finding container 6e560eef5590b64b6701674c0b23f8ba8f5ae436c11a5b33c8b8479b163a789f: Status 404 returned error can't find the container with id 6e560eef5590b64b6701674c0b23f8ba8f5ae436c11a5b33c8b8479b163a789f
	Dec 17 00:39:25 functional-698418 kubelet[6659]: E1217 00:39:25.281127    6659 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765931965280891796  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:189829}  inodes_used:{value:89}}"
	Dec 17 00:39:25 functional-698418 kubelet[6659]: E1217 00:39:25.281268    6659 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765931965280891796  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:189829}  inodes_used:{value:89}}"
	Dec 17 00:39:27 functional-698418 kubelet[6659]: E1217 00:39:27.938472    6659 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-698418" containerName="kube-controller-manager"
	Dec 17 00:39:31 functional-698418 kubelet[6659]: E1217 00:39:31.938265    6659 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-p2tnv" containerName="coredns"
	Dec 17 00:39:35 functional-698418 kubelet[6659]: E1217 00:39:35.282939    6659 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765931975282453811  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:189829}  inodes_used:{value:89}}"
	Dec 17 00:39:35 functional-698418 kubelet[6659]: E1217 00:39:35.282956    6659 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765931975282453811  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:189829}  inodes_used:{value:89}}"
	Dec 17 00:39:35 functional-698418 kubelet[6659]: E1217 00:39:35.938400    6659 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-698418" containerName="kube-scheduler"
	Dec 17 00:39:35 functional-698418 kubelet[6659]: E1217 00:39:35.949816    6659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-698418_kube-system_390b595ba70cd6ac1adab7b4d760d832_2\" already exists"
	Dec 17 00:39:35 functional-698418 kubelet[6659]: E1217 00:39:35.949906    6659 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-698418_kube-system_390b595ba70cd6ac1adab7b4d760d832_2\" already exists" pod="kube-system/kube-scheduler-functional-698418"
	Dec 17 00:39:35 functional-698418 kubelet[6659]: E1217 00:39:35.949943    6659 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-698418_kube-system_390b595ba70cd6ac1adab7b4d760d832_2\" already exists" pod="kube-system/kube-scheduler-functional-698418"
	Dec 17 00:39:35 functional-698418 kubelet[6659]: E1217 00:39:35.950029    6659 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-scheduler-functional-698418_kube-system(390b595ba70cd6ac1adab7b4d760d832)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-scheduler-functional-698418_kube-system(390b595ba70cd6ac1adab7b4d760d832)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-scheduler-functional-698418_kube-system_390b595ba70cd6ac1adab7b4d760d832_2\\\" already exists\"" pod="kube-system/kube-scheduler-functional-698418" podUID="390b595ba70cd6ac1adab7b4d760d832"
	Dec 17 00:39:45 functional-698418 kubelet[6659]: E1217 00:39:45.286045    6659 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765931985285539136  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:189829}  inodes_used:{value:89}}"
	Dec 17 00:39:45 functional-698418 kubelet[6659]: E1217 00:39:45.286129    6659 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765931985285539136  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:189829}  inodes_used:{value:89}}"
	Dec 17 00:39:50 functional-698418 kubelet[6659]: E1217 00:39:50.939169    6659 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-698418" containerName="kube-scheduler"
	Dec 17 00:39:50 functional-698418 kubelet[6659]: E1217 00:39:50.950569    6659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-698418_kube-system_390b595ba70cd6ac1adab7b4d760d832_2\" already exists"
	Dec 17 00:39:50 functional-698418 kubelet[6659]: E1217 00:39:50.950626    6659 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-698418_kube-system_390b595ba70cd6ac1adab7b4d760d832_2\" already exists" pod="kube-system/kube-scheduler-functional-698418"
	Dec 17 00:39:50 functional-698418 kubelet[6659]: E1217 00:39:50.950641    6659 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-698418_kube-system_390b595ba70cd6ac1adab7b4d760d832_2\" already exists" pod="kube-system/kube-scheduler-functional-698418"
	Dec 17 00:39:50 functional-698418 kubelet[6659]: E1217 00:39:50.950708    6659 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-scheduler-functional-698418_kube-system(390b595ba70cd6ac1adab7b4d760d832)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-scheduler-functional-698418_kube-system(390b595ba70cd6ac1adab7b4d760d832)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-scheduler-functional-698418_kube-system_390b595ba70cd6ac1adab7b4d760d832_2\\\" already exists\"" pod="kube-system/kube-scheduler-functional-698418" podUID="390b595ba70cd6ac1adab7b4d760d832"
	Dec 17 00:39:55 functional-698418 kubelet[6659]: E1217 00:39:55.289002    6659 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765931995288269245  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:189829}  inodes_used:{value:89}}"
	Dec 17 00:39:55 functional-698418 kubelet[6659]: E1217 00:39:55.289043    6659 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765931995288269245  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:189829}  inodes_used:{value:89}}"
	
	
	==> storage-provisioner [25dad2630a2cc7b34ac11027a3984e04a6f04faeb4aea873980cbc58aa6ad446] <==
	I1217 00:22:51.076154       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1217 00:22:51.080587       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [eeabbda62f1da3d54be70bfe16b34f07c8e8a53a40811a81f10481ac9ce44216] <==
	W1217 00:39:34.041110       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:39:36.045385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:39:36.054322       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:39:38.057942       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:39:38.066215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:39:40.069964       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:39:40.074911       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:39:42.078608       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:39:42.083917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:39:44.087221       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:39:44.096031       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:39:46.099785       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:39:46.105793       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:39:48.109044       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:39:48.118470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:39:50.122713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:39:50.127661       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:39:52.131684       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:39:52.140444       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:39:54.144591       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:39:54.150020       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:39:56.154027       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:39:56.159233       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:39:58.163094       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:39:58.167922       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-698418 -n functional-698418
helpers_test.go:270: (dbg) Run:  kubectl --context functional-698418 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: busybox-mount hello-node-5758569b79-z5vc8 hello-node-connect-9f67c86d4-n5xgg mysql-7d7b65bc95-m98rn sp-pod dashboard-metrics-scraper-5565989548-wpflw kubernetes-dashboard-b84665fb8-6dx8k
helpers_test.go:283: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context functional-698418 describe pod busybox-mount hello-node-5758569b79-z5vc8 hello-node-connect-9f67c86d4-n5xgg mysql-7d7b65bc95-m98rn sp-pod dashboard-metrics-scraper-5565989548-wpflw kubernetes-dashboard-b84665fb8-6dx8k
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context functional-698418 describe pod busybox-mount hello-node-5758569b79-z5vc8 hello-node-connect-9f67c86d4-n5xgg mysql-7d7b65bc95-m98rn sp-pod dashboard-metrics-scraper-5565989548-wpflw kubernetes-dashboard-b84665fb8-6dx8k: exit status 1 (107.853148ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  mount-munger:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    Environment:  <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p7t2t (ro)
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-p7t2t:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             hello-node-5758569b79-z5vc8
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=hello-node
	                  pod-template-hash=5758569b79
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-node-5758569b79
	Containers:
	  echo-server:
	    Image:        kicbase/echo-server
	    Port:         <none>
	    Host Port:    <none>
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rkgzf (ro)
	Volumes:
	  kube-api-access-rkgzf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             hello-node-connect-9f67c86d4-n5xgg
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=hello-node-connect
	                  pod-template-hash=9f67c86d4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-node-connect-9f67c86d4
	Containers:
	  echo-server:
	    Image:        kicbase/echo-server
	    Port:         <none>
	    Host Port:    <none>
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pj79f (ro)
	Volumes:
	  kube-api-access-pj79f:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             mysql-7d7b65bc95-m98rn
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=mysql
	                  pod-template-hash=7d7b65bc95
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/mysql-7d7b65bc95
	Containers:
	  mysql:
	    Image:      public.ecr.aws/docker/library/mysql:8.4
	    Port:       3306/TCP (mysql)
	    Host Port:  0/TCP (mysql)
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4wznv (ro)
	Volumes:
	  kube-api-access-4wznv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  myfrontend:
	    Image:        public.ecr.aws/nginx/nginx:alpine
	    Port:         <none>
	    Host Port:    <none>
	    Environment:  <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wsvrf (ro)
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-wsvrf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5565989548-wpflw" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-6dx8k" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context functional-698418 describe pod busybox-mount hello-node-5758569b79-z5vc8 hello-node-connect-9f67c86d4-n5xgg mysql-7d7b65bc95-m98rn sp-pod dashboard-metrics-scraper-5565989548-wpflw kubernetes-dashboard-b84665fb8-6dx8k: exit status 1
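The non-zero exit here is expected: two of the pods collected by the non-running-pod filter (the dashboard scraper and dashboard pods) were already gone by the time describe ran. A defensive variant of the same check, shown only as a sketch and not as the harness's own code, describes each surviving pod individually:

    # re-list non-running pods across namespaces and describe them one at a time,
    # so a single missing pod cannot fail the whole step (assumes bash + kubectl)
    kubectl --context functional-698418 get pods -A --field-selector=status.phase!=Running \
      -o jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}' |
    while read -r ns name; do
      kubectl --context functional-698418 describe pod "$name" -n "$ns"
    done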
E1217 00:42:12.864342   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-069802/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (302.05s)
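For the dashboard timeout itself, note that minikube's dashboard addon normally deploys its workloads into the kubernetes-dashboard namespace rather than default, so a quick manual check (illustrative only) is:

    # confirm whether the dashboard deployments and pods were ever created
    kubectl --context functional-698418 get deploy,pods -n kubernetes-dashboard
    # the invocation under test, as recorded in the audit table later in this report
    out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-698418 --alsologtostderr -v=1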

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (602.65s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-698418 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-698418 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-9f67c86d4-n5xgg" [9f8e8a16-c84b-4f48-8ea4-3333e17977ad] Pending
E1217 00:33:44.661040   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1645: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-698418 -n functional-698418
functional_test.go:1645: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-12-17 00:42:56.660394214 +0000 UTC m=+2203.363321387
functional_test.go:1645: (dbg) Run:  kubectl --context functional-698418 describe po hello-node-connect-9f67c86d4-n5xgg -n default
functional_test.go:1645: (dbg) kubectl --context functional-698418 describe po hello-node-connect-9f67c86d4-n5xgg -n default:
Name:             hello-node-connect-9f67c86d4-n5xgg
Namespace:        default
Priority:         0
Service Account:  default
Node:             <none>
Labels:           app=hello-node-connect
                  pod-template-hash=9f67c86d4
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    ReplicaSet/hello-node-connect-9f67c86d4
Containers:
  echo-server:
    Image:        kicbase/echo-server
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pj79f (ro)
Volumes:
  kube-api-access-pj79f:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>
functional_test.go:1645: (dbg) Run:  kubectl --context functional-698418 logs hello-node-connect-9f67c86d4-n5xgg -n default
functional_test.go:1645: (dbg) kubectl --context functional-698418 logs hello-node-connect-9f67c86d4-n5xgg -n default:
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
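The pod never left Pending and was never bound to a node (Node: <none>, no recorded events), which points at scheduling rather than at the image or the service. A hedged debugging sketch for this state, assuming the same kubectl context:

    # is the single minikube node Ready and schedulable?
    kubectl --context functional-698418 get nodes -o wide
    # any scheduler events for this pod? (the describe above showed none)
    kubectl --context functional-698418 get events -n default \
      --field-selector involvedObject.name=hello-node-connect-9f67c86d4-n5xgg
    # kubeadm labels the scheduler static pod with component=kube-scheduler
    kubectl --context functional-698418 -n kube-system logs -l component=kube-scheduler --tail=50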
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-698418 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-9f67c86d4-n5xgg
Namespace:        default
Priority:         0
Service Account:  default
Node:             <none>
Labels:           app=hello-node-connect
                  pod-template-hash=9f67c86d4
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    ReplicaSet/hello-node-connect-9f67c86d4
Containers:
  echo-server:
    Image:        kicbase/echo-server
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pj79f (ro)
Volumes:
  kube-api-access-pj79f:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-698418 logs -l app=hello-node-connect
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-698418 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.97.89.136
IPs:                      10.97.89.136
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30893/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
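The service describe confirms the underlying symptom: Endpoints is empty, so NodePort 30893 has no ready backend and connection attempts must fail. A minimal cross-check (sketch, same context and profile as above):

    # pods matched by the service selector; with the pod stuck Pending nothing can back the NodePort
    kubectl --context functional-698418 get pods -l app=hello-node-connect -n default -o wide
    # probing the NodePort from inside the VM will keep failing until an endpoint appears
    out/minikube-linux-amd64 -p functional-698418 ssh "curl -s -o /dev/null -w '%{http_code}\n' http://localhost:30893/"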
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-698418 -n functional-698418
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-698418 logs -n 25: (1.296286871s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                    ARGS                                                                     │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-698418 ssh findmnt -T /mount1                                                                                                    │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │ 17 Dec 25 00:32 UTC │
	│ ssh            │ functional-698418 ssh findmnt -T /mount2                                                                                                    │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │ 17 Dec 25 00:32 UTC │
	│ ssh            │ functional-698418 ssh findmnt -T /mount3                                                                                                    │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │ 17 Dec 25 00:32 UTC │
	│ mount          │ -p functional-698418 --kill=true                                                                                                            │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │                     │
	│ addons         │ functional-698418 addons list                                                                                                               │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │ 17 Dec 25 00:32 UTC │
	│ addons         │ functional-698418 addons list -o json                                                                                                       │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │ 17 Dec 25 00:32 UTC │
	│ start          │ -p functional-698418 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │                     │
	│ start          │ -p functional-698418 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0           │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │                     │
	│ start          │ -p functional-698418 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:34 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-698418 --alsologtostderr -v=1                                                                              │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:34 UTC │                     │
	│ service        │ functional-698418 service list                                                                                                              │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:38 UTC │ 17 Dec 25 00:38 UTC │
	│ service        │ functional-698418 service list -o json                                                                                                      │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:38 UTC │ 17 Dec 25 00:38 UTC │
	│ update-context │ functional-698418 update-context --alsologtostderr -v=2                                                                                     │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:38 UTC │ 17 Dec 25 00:38 UTC │
	│ update-context │ functional-698418 update-context --alsologtostderr -v=2                                                                                     │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:38 UTC │ 17 Dec 25 00:38 UTC │
	│ update-context │ functional-698418 update-context --alsologtostderr -v=2                                                                                     │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:38 UTC │ 17 Dec 25 00:38 UTC │
	│ image          │ functional-698418 image ls --format short --alsologtostderr                                                                                 │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:38 UTC │ 17 Dec 25 00:38 UTC │
	│ image          │ functional-698418 image ls --format yaml --alsologtostderr                                                                                  │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:38 UTC │ 17 Dec 25 00:38 UTC │
	│ ssh            │ functional-698418 ssh pgrep buildkitd                                                                                                       │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:38 UTC │                     │
	│ service        │ functional-698418 service --namespace=default --https --url hello-node                                                                      │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:38 UTC │                     │
	│ image          │ functional-698418 image build -t localhost/my-image:functional-698418 testdata/build --alsologtostderr                                      │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:38 UTC │ 17 Dec 25 00:38 UTC │
	│ service        │ functional-698418 service hello-node --url --format={{.IP}}                                                                                 │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:38 UTC │                     │
	│ service        │ functional-698418 service hello-node --url                                                                                                  │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:38 UTC │                     │
	│ image          │ functional-698418 image ls                                                                                                                  │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:38 UTC │ 17 Dec 25 00:38 UTC │
	│ image          │ functional-698418 image ls --format json --alsologtostderr                                                                                  │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:38 UTC │ 17 Dec 25 00:38 UTC │
	│ image          │ functional-698418 image ls --format table --alsologtostderr                                                                                 │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:38 UTC │ 17 Dec 25 00:38 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:34:58
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:34:58.017344   29429 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:34:58.017575   29429 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:34:58.017583   29429 out.go:374] Setting ErrFile to fd 2...
	I1217 00:34:58.017587   29429 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:34:58.017835   29429 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12839/.minikube/bin
	I1217 00:34:58.018256   29429 out.go:368] Setting JSON to false
	I1217 00:34:58.019096   29429 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4644,"bootTime":1765927054,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 00:34:58.019154   29429 start.go:143] virtualization: kvm guest
	I1217 00:34:58.021244   29429 out.go:179] * [functional-698418] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 00:34:58.022849   29429 notify.go:221] Checking for updates...
	I1217 00:34:58.022887   29429 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:34:58.024449   29429 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:34:58.025969   29429 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-12839/kubeconfig
	I1217 00:34:58.027336   29429 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12839/.minikube
	I1217 00:34:58.028756   29429 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 00:34:58.030134   29429 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:34:58.031812   29429 config.go:182] Loaded profile config "functional-698418": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:34:58.032278   29429 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:34:58.062597   29429 out.go:179] * Using the kvm2 driver based on the existing profile
	I1217 00:34:58.063991   29429 start.go:309] selected driver: kvm2
	I1217 00:34:58.064002   29429 start.go:927] validating driver "kvm2" against &{Name:functional-698418 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-698418 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:34:58.064121   29429 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:34:58.066098   29429 out.go:203] 
	W1217 00:34:58.067330   29429 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250MiB is less than the usable minimum of 1800MB
	I1217 00:34:58.068510   29429 out.go:203] 
	
	
	==> CRI-O <==
	Dec 17 00:42:57 functional-698418 crio[6296]: time="2025-12-17 00:42:57.618785344Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:45138ffa0bb91dd23f83c681b3b9a410d900a685f09db3e936c18d79bcc7ff70,Metadata:&PodSandboxMetadata{Name:coredns-7d764666f9-p2tnv,Uid:375f206a-98e8-4a86-b794-274b2ac5d46d,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1765931069417254183,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7d764666f9-p2tnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375f206a-98e8-4a86-b794-274b2ac5d46d,k8s-app: kube-dns,pod-template-hash: 7d764666f9,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-17T00:24:28.898956791Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a09572a7b0786aac784d1d4cef2dc3002c196faae35dcd055e33dd7b23a301bf,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:8aeec296-0f7d-489d-88c0-1a8f24bcdb27,Namespace:kube-system
,Attempt:3,},State:SANDBOX_READY,CreatedAt:1765931069256290564,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aeec296-0f7d-489d-88c0-1a8f24bcdb27,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\
":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-12-17T00:24:28.898955389Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1fbefd3a7421f756224811c63d74b6b380f3f61c9739cb3784937ab0f86c4362,Metadata:&PodSandboxMetadata{Name:kube-proxy-qmz66,Uid:380aa506-9f03-4398-8d13-ac938ed6953c,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1765931069240460749,Labels:map[string]string{controller-revision-hash: 7bd5454df7,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-qmz66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 380aa506-9f03-4398-8d13-ac938ed6953c,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-17T00:24:28.898961527Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5a541b1e17042c254bf59885fe7f01bb4f3cb8682cf724a0d3dc11c806811a96,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-functional-698418,Uid:2
a9045876df478aae3a7b636723bc540,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1765931065589722246,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a9045876df478aae3a7b636723bc540,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2a9045876df478aae3a7b636723bc540,kubernetes.io/config.seen: 2025-12-17T00:24:24.906954007Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:cb65bcef9b7f3551813702c3d2e3535cec74c3f21ef85ef8660fc3b5ed3ee837,Metadata:&PodSandboxMetadata{Name:kube-apiserver-functional-698418,Uid:1ce7e9b17a3dd76e454bf214ca11d85f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765931065589245770,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-functional-698418,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 1ce7e9b17a3dd76e454bf214ca11d85f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.109:8441,kubernetes.io/config.hash: 1ce7e9b17a3dd76e454bf214ca11d85f,kubernetes.io/config.seen: 2025-12-17T00:24:24.906952914Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:47788f5b505e2ab0b18e49d43a9d48485f0467d50546073946f22bc9d65190f8,Metadata:&PodSandboxMetadata{Name:etcd-functional-698418,Uid:a712d99056792744476561e1a0361d20,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1765931065588791216,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a712d99056792744476561e1a0361d20,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.109:2379,kubernetes.io/config.hash: a
712d99056792744476561e1a0361d20,kubernetes.io/config.seen: 2025-12-17T00:24:24.906945395Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:167bfbac01f7e4c703b6085e2349b7dc06d514f766c93cf3cfe7e69d8a32a706,Metadata:&PodSandboxMetadata{Name:coredns-7d764666f9-p2tnv,Uid:375f206a-98e8-4a86-b794-274b2ac5d46d,Namespace:kube-system,Attempt:2,},State:SANDBOX_NOTREADY,CreatedAt:1765930970318176395,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7d764666f9-p2tnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375f206a-98e8-4a86-b794-274b2ac5d46d,k8s-app: kube-dns,pod-template-hash: 7d764666f9,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-17T00:21:37.094813101Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bca5c63a70f5569e938857e5287ac12f75220866ec46e3511d6af97931577564,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:8aeec296-0f7d-489d-88c0-1a8f24bcdb27,Namespace:kube-system,Attempt:2,},Stat
e:SANDBOX_NOTREADY,CreatedAt:1765930970314314045,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aeec296-0f7d-489d-88c0-1a8f24bcdb27,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"
/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-12-17T00:21:37.094811769Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6e560eef5590b64b6701674c0b23f8ba8f5ae436c11a5b33c8b8479b163a789f,Metadata:&PodSandboxMetadata{Name:kube-proxy-qmz66,Uid:380aa506-9f03-4398-8d13-ac938ed6953c,Namespace:kube-system,Attempt:2,},State:SANDBOX_NOTREADY,CreatedAt:1765930970020012264,Labels:map[string]string{controller-revision-hash: 7bd5454df7,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-qmz66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 380aa506-9f03-4398-8d13-ac938ed6953c,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-17T00:21:37.094809600Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d4cac43b8d39630edc4155c4315349ac5c82c0155fc2675a8a0df421a0b3e285,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-functional-698418,Uid:2a9045876df4
78aae3a7b636723bc540,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1765930768451249660,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a9045876df478aae3a7b636723bc540,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2a9045876df478aae3a7b636723bc540,kubernetes.io/config.seen: 2025-12-17T00:18:37.825797754Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:affe536f1f44e4a5eec045150c989f9cf8613992cd1beaf52e0734244ce8c1bd,Metadata:&PodSandboxMetadata{Name:kube-scheduler-functional-698418,Uid:390b595ba70cd6ac1adab7b4d760d832,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1765930768338346727,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-functional-698418,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 390b595ba70cd6ac1adab7b4d760d832,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 390b595ba70cd6ac1adab7b4d760d832,kubernetes.io/config.seen: 2025-12-17T00:18:37.825801625Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5b1069943f833c3bb09b24a2a7a536da2fcef186396d925dddd710e4b8fa9150,Metadata:&PodSandboxMetadata{Name:etcd-functional-698418,Uid:a712d99056792744476561e1a0361d20,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1765930768238266949,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a712d99056792744476561e1a0361d20,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.109:2379,kubernetes.io/config.hash: a712d99056792744476561e1a0361d20,kubernetes.io/config.seen: 2025-12-17T00:18:37.82580282
0Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=c413ec5e-169f-425f-8c5f-798e70b47da0 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 17 00:42:57 functional-698418 crio[6296]: time="2025-12-17 00:42:57.619351333Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=200a9db2-f2ba-47fa-a999-3865ea9d54f6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 00:42:57 functional-698418 crio[6296]: time="2025-12-17 00:42:57.619396966Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=200a9db2-f2ba-47fa-a999-3865ea9d54f6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 00:42:57 functional-698418 crio[6296]: time="2025-12-17 00:42:57.619664931Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b5f5abaf95cb222939db1aa90e240c94409e5d233b67a38091d26213a9ba10de,PodSandboxId:45138ffa0bb91dd23f83c681b3b9a410d900a685f09db3e936c18d79bcc7ff70,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765931069848478198,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-p2tnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375f206a-98e8-4a86-b794-274b2ac5d46d,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b30d8f3ed892ad639fe78bbed50c837c0bbb626952234350ebf7fa8426e1db2,PodSandboxId:1fbefd3a7421f756224811c63d74b6b380f3f61c9739cb3784937ab0f86c4362,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1765931069582611906,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmz66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 380aa506-9f03-4398-8d13-ac938ed6953c,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeabbda62f1da3d54be70bfe16b34f07c8e8a53a40811a81f10481ac9ce44216,PodSandboxId:a09572a7b0786aac784d1d4cef2dc3002c196faae35dcd055e33dd7b23a301bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765931069585947824,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aeec296-0f7d-489d-88c0-1a8f24bcdb27,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62c820b9f36a95750703dfbcacf0065cf51309e962dd5c6e6ba051e32abbd972,PodSandboxId:cb65bcef9b7f3551813702c3d2e3535cec74c3f21ef85ef8660fc3b5ed3ee837,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:1765931065819368164,Labels:map[string]string{io.kubernetes.container.name:
kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce7e9b17a3dd76e454bf214ca11d85f,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7787544c3b26bf83a9e4d701ebadf14ddc0d6c98ed581e4c9a78f8a3d5809692,PodSandboxId:5a541b1e17042c254bf59885fe7f01bb4f3cb8682cf724a0d3dc11c806811a96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d
6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1765931065782123739,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a9045876df478aae3a7b636723bc540,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6305bb233aef9c18d4ca9912e6cec1535112b9389a53466a2ae441f3d1c8b630,PodSandboxId:47788f5b505e2ab0b18e49d43a9d48485f0467d50546073946f22bc9d65190f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765931065773166649,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a712d99056792744476561e1a0361d20,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe278b7670e033003fe8649b2824cf6b6516ce6b4b51066c8b25075f6c74ecd1,PodSandboxId:167bfbac01f7e4c703b6085e2349b7dc06d514f766c93cf3cfe7e69d8a32a706,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa
5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765930971276417771,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-p2tnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375f206a-98e8-4a86-b794-274b2ac5d46d,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25dad2630a2cc7b34ac11027a3984e04a6f04faeb4aea873980cbc58aa6ad446,PodSandboxId:bca5c63a70f5569e938857e5287ac12f75220866ec46e3511d6af97931577564,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765930970784659776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aeec296-0f7d-489d-88c0-1a8f24bcdb27,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6da27c7e1968f00886d58afd572464f1215b610d6f810eae1693c39394b159ab,PodSandboxId:6e560eef5590b64b6701674c0b23f8ba8f5ae436c11a5b33c8b8479b163a789f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_EXITED,CreatedAt:1765930970517125898,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmz66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 380aa506-9f03-4398-8d13-ac938ed6953c,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessageP
olicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ccf8afdca857a06f5fafcae2d2203299adc9758a4deda7044a39b8fa9c78cf0,PodSandboxId:5b1069943f833c3bb09b24a2a7a536da2fcef186396d925dddd710e4b8fa9150,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765930894126083795,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a712d99056792744476561e1a0361d20,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:089ad298c6676a699fe577e2d5d71b0c488b5a9d3e33173bdc20901711843fd1,PodSandboxId:d4cac43b8d39630edc4155c4315349ac5c82c0155fc2675a8a0df421a0b3e285,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765930894091115325,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a9045876df478aae3a7b636723bc540,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"host
Port\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95a7023d7b96443435118fb950ed0496e6a19ddc94f55a8e252d4ea702aa939b,PodSandboxId:affe536f1f44e4a5eec045150c989f9cf8613992cd1beaf52e0734244ce8c1bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765930894095856281,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 390b595ba70cd6ac1adab7b4d760d832,},Annotations:map[string]
string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=200a9db2-f2ba-47fa-a999-3865ea9d54f6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 00:42:57 functional-698418 crio[6296]: time="2025-12-17 00:42:57.654336548Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bc4d24cc-9c41-46b4-bd7a-b5768a616e3c name=/runtime.v1.RuntimeService/Version
	Dec 17 00:42:57 functional-698418 crio[6296]: time="2025-12-17 00:42:57.654444497Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bc4d24cc-9c41-46b4-bd7a-b5768a616e3c name=/runtime.v1.RuntimeService/Version
	Dec 17 00:42:57 functional-698418 crio[6296]: time="2025-12-17 00:42:57.655927443Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=971572f6-84dd-4f17-87be-794ceb1be173 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 00:42:57 functional-698418 crio[6296]: time="2025-12-17 00:42:57.656557357Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765932177656476253,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:189829,},InodesUsed:&UInt64Value{Value:89,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=971572f6-84dd-4f17-87be-794ceb1be173 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 00:42:57 functional-698418 crio[6296]: time="2025-12-17 00:42:57.657454404Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bff43338-4669-4acb-ade4-6aedff32088d name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 00:42:57 functional-698418 crio[6296]: time="2025-12-17 00:42:57.657735031Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bff43338-4669-4acb-ade4-6aedff32088d name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 00:42:57 functional-698418 crio[6296]: time="2025-12-17 00:42:57.658297096Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b5f5abaf95cb222939db1aa90e240c94409e5d233b67a38091d26213a9ba10de,PodSandboxId:45138ffa0bb91dd23f83c681b3b9a410d900a685f09db3e936c18d79bcc7ff70,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765931069848478198,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-p2tnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375f206a-98e8-4a86-b794-274b2ac5d46d,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b30d8f3ed892ad639fe78bbed50c837c0bbb626952234350ebf7fa8426e1db2,PodSandboxId:1fbefd3a7421f756224811c63d74b6b380f3f61c9739cb3784937ab0f86c4362,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1765931069582611906,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmz66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 380aa506-9f03-4398-8d13-ac938ed6953c,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeabbda62f1da3d54be70bfe16b34f07c8e8a53a40811a81f10481ac9ce44216,PodSandboxId:a09572a7b0786aac784d1d4cef2dc3002c196faae35dcd055e33dd7b23a301bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765931069585947824,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aeec296-0f7d-489d-88c0-1a8f24bcdb27,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62c820b9f36a95750703dfbcacf0065cf51309e962dd5c6e6ba051e32abbd972,PodSandboxId:cb65bcef9b7f3551813702c3d2e3535cec74c3f21ef85ef8660fc3b5ed3ee837,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:1765931065819368164,Labels:map[string]string{io.kubernetes.container.name:
kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce7e9b17a3dd76e454bf214ca11d85f,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7787544c3b26bf83a9e4d701ebadf14ddc0d6c98ed581e4c9a78f8a3d5809692,PodSandboxId:5a541b1e17042c254bf59885fe7f01bb4f3cb8682cf724a0d3dc11c806811a96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d
6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1765931065782123739,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a9045876df478aae3a7b636723bc540,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6305bb233aef9c18d4ca9912e6cec1535112b9389a53466a2ae441f3d1c8b630,PodSandboxId:47788f5b505e2ab0b18e49d43a9d48485f0467d50546073946f22bc9d65190f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765931065773166649,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a712d99056792744476561e1a0361d20,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe278b7670e033003fe8649b2824cf6b6516ce6b4b51066c8b25075f6c74ecd1,PodSandboxId:167bfbac01f7e4c703b6085e2349b7dc06d514f766c93cf3cfe7e69d8a32a706,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa
5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765930971276417771,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-p2tnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375f206a-98e8-4a86-b794-274b2ac5d46d,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25dad2630a2cc7b34ac11027a3984e04a6f04faeb4aea873980cbc58aa6ad446,PodSandboxId:bca5c63a70f5569e938857e5287ac12f75220866ec46e3511d6af97931577564,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765930970784659776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aeec296-0f7d-489d-88c0-1a8f24bcdb27,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6da27c7e1968f00886d58afd572464f1215b610d6f810eae1693c39394b159ab,PodSandboxId:6e560eef5590b64b6701674c0b23f8ba8f5ae436c11a5b33c8b8479b163a789f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_EXITED,CreatedAt:1765930970517125898,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmz66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 380aa506-9f03-4398-8d13-ac938ed6953c,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessageP
olicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ccf8afdca857a06f5fafcae2d2203299adc9758a4deda7044a39b8fa9c78cf0,PodSandboxId:5b1069943f833c3bb09b24a2a7a536da2fcef186396d925dddd710e4b8fa9150,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765930894126083795,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a712d99056792744476561e1a0361d20,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:089ad298c6676a699fe577e2d5d71b0c488b5a9d3e33173bdc20901711843fd1,PodSandboxId:d4cac43b8d39630edc4155c4315349ac5c82c0155fc2675a8a0df421a0b3e285,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765930894091115325,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a9045876df478aae3a7b636723bc540,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"host
Port\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95a7023d7b96443435118fb950ed0496e6a19ddc94f55a8e252d4ea702aa939b,PodSandboxId:affe536f1f44e4a5eec045150c989f9cf8613992cd1beaf52e0734244ce8c1bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765930894095856281,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 390b595ba70cd6ac1adab7b4d760d832,},Annotations:map[string]
string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bff43338-4669-4acb-ade4-6aedff32088d name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 00:42:57 functional-698418 crio[6296]: time="2025-12-17 00:42:57.689588155Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5f35bb45-720f-4530-83b4-17e27f2d0ec1 name=/runtime.v1.RuntimeService/Version
	Dec 17 00:42:57 functional-698418 crio[6296]: time="2025-12-17 00:42:57.689663118Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5f35bb45-720f-4530-83b4-17e27f2d0ec1 name=/runtime.v1.RuntimeService/Version
	Dec 17 00:42:57 functional-698418 crio[6296]: time="2025-12-17 00:42:57.691127279Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a9850ecb-e1d9-41a6-9cfb-0284337684e4 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 00:42:57 functional-698418 crio[6296]: time="2025-12-17 00:42:57.692204570Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765932177692170045,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:189829,},InodesUsed:&UInt64Value{Value:89,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a9850ecb-e1d9-41a6-9cfb-0284337684e4 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 00:42:57 functional-698418 crio[6296]: time="2025-12-17 00:42:57.692883092Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=97faff4f-7df6-4bb6-a136-9964716959a7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 00:42:57 functional-698418 crio[6296]: time="2025-12-17 00:42:57.692945235Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=97faff4f-7df6-4bb6-a136-9964716959a7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 00:42:57 functional-698418 crio[6296]: time="2025-12-17 00:42:57.693180681Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b5f5abaf95cb222939db1aa90e240c94409e5d233b67a38091d26213a9ba10de,PodSandboxId:45138ffa0bb91dd23f83c681b3b9a410d900a685f09db3e936c18d79bcc7ff70,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765931069848478198,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-p2tnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375f206a-98e8-4a86-b794-274b2ac5d46d,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b30d8f3ed892ad639fe78bbed50c837c0bbb626952234350ebf7fa8426e1db2,PodSandboxId:1fbefd3a7421f756224811c63d74b6b380f3f61c9739cb3784937ab0f86c4362,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1765931069582611906,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmz66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 380aa506-9f03-4398-8d13-ac938ed6953c,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeabbda62f1da3d54be70bfe16b34f07c8e8a53a40811a81f10481ac9ce44216,PodSandboxId:a09572a7b0786aac784d1d4cef2dc3002c196faae35dcd055e33dd7b23a301bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765931069585947824,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aeec296-0f7d-489d-88c0-1a8f24bcdb27,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62c820b9f36a95750703dfbcacf0065cf51309e962dd5c6e6ba051e32abbd972,PodSandboxId:cb65bcef9b7f3551813702c3d2e3535cec74c3f21ef85ef8660fc3b5ed3ee837,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:1765931065819368164,Labels:map[string]string{io.kubernetes.container.name:
kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce7e9b17a3dd76e454bf214ca11d85f,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7787544c3b26bf83a9e4d701ebadf14ddc0d6c98ed581e4c9a78f8a3d5809692,PodSandboxId:5a541b1e17042c254bf59885fe7f01bb4f3cb8682cf724a0d3dc11c806811a96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d
6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1765931065782123739,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a9045876df478aae3a7b636723bc540,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6305bb233aef9c18d4ca9912e6cec1535112b9389a53466a2ae441f3d1c8b630,PodSandboxId:47788f5b505e2ab0b18e49d43a9d48485f0467d50546073946f22bc9d65190f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765931065773166649,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a712d99056792744476561e1a0361d20,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe278b7670e033003fe8649b2824cf6b6516ce6b4b51066c8b25075f6c74ecd1,PodSandboxId:167bfbac01f7e4c703b6085e2349b7dc06d514f766c93cf3cfe7e69d8a32a706,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa
5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765930971276417771,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-p2tnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375f206a-98e8-4a86-b794-274b2ac5d46d,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25dad2630a2cc7b34ac11027a3984e04a6f04faeb4aea873980cbc58aa6ad446,PodSandboxId:bca5c63a70f5569e938857e5287ac12f75220866ec46e3511d6af97931577564,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765930970784659776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aeec296-0f7d-489d-88c0-1a8f24bcdb27,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6da27c7e1968f00886d58afd572464f1215b610d6f810eae1693c39394b159ab,PodSandboxId:6e560eef5590b64b6701674c0b23f8ba8f5ae436c11a5b33c8b8479b163a789f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_EXITED,CreatedAt:1765930970517125898,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmz66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 380aa506-9f03-4398-8d13-ac938ed6953c,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessageP
olicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ccf8afdca857a06f5fafcae2d2203299adc9758a4deda7044a39b8fa9c78cf0,PodSandboxId:5b1069943f833c3bb09b24a2a7a536da2fcef186396d925dddd710e4b8fa9150,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765930894126083795,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a712d99056792744476561e1a0361d20,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:089ad298c6676a699fe577e2d5d71b0c488b5a9d3e33173bdc20901711843fd1,PodSandboxId:d4cac43b8d39630edc4155c4315349ac5c82c0155fc2675a8a0df421a0b3e285,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765930894091115325,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a9045876df478aae3a7b636723bc540,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"host
Port\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95a7023d7b96443435118fb950ed0496e6a19ddc94f55a8e252d4ea702aa939b,PodSandboxId:affe536f1f44e4a5eec045150c989f9cf8613992cd1beaf52e0734244ce8c1bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765930894095856281,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 390b595ba70cd6ac1adab7b4d760d832,},Annotations:map[string]
string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=97faff4f-7df6-4bb6-a136-9964716959a7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 00:42:57 functional-698418 crio[6296]: time="2025-12-17 00:42:57.726262922Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e981769d-1673-4369-bc86-ee3228b5f7be name=/runtime.v1.RuntimeService/Version
	Dec 17 00:42:57 functional-698418 crio[6296]: time="2025-12-17 00:42:57.726340409Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e981769d-1673-4369-bc86-ee3228b5f7be name=/runtime.v1.RuntimeService/Version
	Dec 17 00:42:57 functional-698418 crio[6296]: time="2025-12-17 00:42:57.727851678Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ac03cbd3-99f0-47f2-b4d1-699a7ab94577 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 00:42:57 functional-698418 crio[6296]: time="2025-12-17 00:42:57.728404468Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765932177728383713,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:189829,},InodesUsed:&UInt64Value{Value:89,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ac03cbd3-99f0-47f2-b4d1-699a7ab94577 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 00:42:57 functional-698418 crio[6296]: time="2025-12-17 00:42:57.729710372Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4abd4956-ddd8-481c-a5ff-4092b636f152 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 00:42:57 functional-698418 crio[6296]: time="2025-12-17 00:42:57.729918361Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4abd4956-ddd8-481c-a5ff-4092b636f152 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 00:42:57 functional-698418 crio[6296]: time="2025-12-17 00:42:57.730566612Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b5f5abaf95cb222939db1aa90e240c94409e5d233b67a38091d26213a9ba10de,PodSandboxId:45138ffa0bb91dd23f83c681b3b9a410d900a685f09db3e936c18d79bcc7ff70,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765931069848478198,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-p2tnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375f206a-98e8-4a86-b794-274b2ac5d46d,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b30d8f3ed892ad639fe78bbed50c837c0bbb626952234350ebf7fa8426e1db2,PodSandboxId:1fbefd3a7421f756224811c63d74b6b380f3f61c9739cb3784937ab0f86c4362,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1765931069582611906,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmz66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 380aa506-9f03-4398-8d13-ac938ed6953c,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeabbda62f1da3d54be70bfe16b34f07c8e8a53a40811a81f10481ac9ce44216,PodSandboxId:a09572a7b0786aac784d1d4cef2dc3002c196faae35dcd055e33dd7b23a301bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765931069585947824,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aeec296-0f7d-489d-88c0-1a8f24bcdb27,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62c820b9f36a95750703dfbcacf0065cf51309e962dd5c6e6ba051e32abbd972,PodSandboxId:cb65bcef9b7f3551813702c3d2e3535cec74c3f21ef85ef8660fc3b5ed3ee837,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:1765931065819368164,Labels:map[string]string{io.kubernetes.container.name:
kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce7e9b17a3dd76e454bf214ca11d85f,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7787544c3b26bf83a9e4d701ebadf14ddc0d6c98ed581e4c9a78f8a3d5809692,PodSandboxId:5a541b1e17042c254bf59885fe7f01bb4f3cb8682cf724a0d3dc11c806811a96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d
6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1765931065782123739,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a9045876df478aae3a7b636723bc540,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6305bb233aef9c18d4ca9912e6cec1535112b9389a53466a2ae441f3d1c8b630,PodSandboxId:47788f5b505e2ab0b18e49d43a9d48485f0467d50546073946f22bc9d65190f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765931065773166649,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a712d99056792744476561e1a0361d20,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe278b7670e033003fe8649b2824cf6b6516ce6b4b51066c8b25075f6c74ecd1,PodSandboxId:167bfbac01f7e4c703b6085e2349b7dc06d514f766c93cf3cfe7e69d8a32a706,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa
5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765930971276417771,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-p2tnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375f206a-98e8-4a86-b794-274b2ac5d46d,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25dad2630a2cc7b34ac11027a3984e04a6f04faeb4aea873980cbc58aa6ad446,PodSandboxId:bca5c63a70f5569e938857e5287ac12f75220866ec46e3511d6af97931577564,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765930970784659776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aeec296-0f7d-489d-88c0-1a8f24bcdb27,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6da27c7e1968f00886d58afd572464f1215b610d6f810eae1693c39394b159ab,PodSandboxId:6e560eef5590b64b6701674c0b23f8ba8f5ae436c11a5b33c8b8479b163a789f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_EXITED,CreatedAt:1765930970517125898,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmz66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 380aa506-9f03-4398-8d13-ac938ed6953c,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessageP
olicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ccf8afdca857a06f5fafcae2d2203299adc9758a4deda7044a39b8fa9c78cf0,PodSandboxId:5b1069943f833c3bb09b24a2a7a536da2fcef186396d925dddd710e4b8fa9150,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765930894126083795,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a712d99056792744476561e1a0361d20,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:089ad298c6676a699fe577e2d5d71b0c488b5a9d3e33173bdc20901711843fd1,PodSandboxId:d4cac43b8d39630edc4155c4315349ac5c82c0155fc2675a8a0df421a0b3e285,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765930894091115325,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a9045876df478aae3a7b636723bc540,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"host
Port\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95a7023d7b96443435118fb950ed0496e6a19ddc94f55a8e252d4ea702aa939b,PodSandboxId:affe536f1f44e4a5eec045150c989f9cf8613992cd1beaf52e0734244ce8c1bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765930894095856281,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 390b595ba70cd6ac1adab7b4d760d832,},Annotations:map[string]
string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4abd4956-ddd8-481c-a5ff-4092b636f152 name=/runtime.v1.RuntimeService/ListContainers
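	This debug entry is CRI-O answering a periodic runtime.v1.RuntimeService/ListContainers poll; the request carries an empty filter ("No filters were applied"), so each poll returns the full container list dumped above, and the identical payload recurs on every poll cycle. As a minimal sketch, the same query can be issued directly over CRI-O's gRPC socket, assuming the default socket path /var/run/crio/crio.sock and the k8s.io/cri-api bindings (both assumptions, not taken from this log):

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Dial CRI-O's unix socket (default path; adjust if configured differently).
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// An empty filter returns the full list -- the case the
		// "No filters were applied" debug line above corresponds to.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s  %-17s  %s\n", c.Id[:13], c.State, c.Metadata.Name)
		}
	}

	The "container status" table below renders this same container list (IDs, images, attempts, states) in tabular form.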
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	b5f5abaf95cb2       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139   18 minutes ago      Running             coredns                   3                   45138ffa0bb91       coredns-7d764666f9-p2tnv                    kube-system
	eeabbda62f1da       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 minutes ago      Running             storage-provisioner       3                   a09572a7b0786       storage-provisioner                         kube-system
	8b30d8f3ed892       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   18 minutes ago      Running             kube-proxy                3                   1fbefd3a7421f       kube-proxy-qmz66                            kube-system
	62c820b9f36a9       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   18 minutes ago      Running             kube-apiserver            0                   cb65bcef9b7f3       kube-apiserver-functional-698418            kube-system
	7787544c3b26b       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   18 minutes ago      Running             kube-controller-manager   3                   5a541b1e17042       kube-controller-manager-functional-698418   kube-system
	6305bb233aef9       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   18 minutes ago      Running             etcd                      3                   47788f5b505e2       etcd-functional-698418                      kube-system
	fe278b7670e03       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139   20 minutes ago      Exited              coredns                   2                   167bfbac01f7e       coredns-7d764666f9-p2tnv                    kube-system
	25dad2630a2cc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   20 minutes ago      Exited              storage-provisioner       2                   bca5c63a70f55       storage-provisioner                         kube-system
	6da27c7e1968f       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   20 minutes ago      Exited              kube-proxy                2                   6e560eef5590b       kube-proxy-qmz66                            kube-system
	4ccf8afdca857       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   21 minutes ago      Exited              etcd                      2                   5b1069943f833       etcd-functional-698418                      kube-system
	95a7023d7b964       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   21 minutes ago      Exited              kube-scheduler            2                   affe536f1f44e       kube-scheduler-functional-698418            kube-system
	089ad298c6676       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   21 minutes ago      Exited              kube-controller-manager   2                   d4cac43b8d396       kube-controller-manager-functional-698418   kube-system
	
	
	==> coredns [b5f5abaf95cb222939db1aa90e240c94409e5d233b67a38091d26213a9ba10de] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:53680 - 57417 "HINFO IN 5216687169014558221.4342564943848837697. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.019890703s
	
	
	==> coredns [fe278b7670e033003fe8649b2824cf6b6516ce6b4b51066c8b25075f6c74ecd1] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:35643 - 58571 "HINFO IN 8723388857180390004.4112128438720857375. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.021681051s
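	The liveness-probe (8080) and readiness-probe (8181) container ports annotated on the coredns containers above are served by CoreDNS's health and ready plugins, which is why the exiting instance logs "plugin/health: Going into lameduck mode for 5s" on SIGTERM. A minimal sketch probing both endpoints, assuming local access to the pod (for example via kubectl port-forward; the 127.0.0.1 host is an assumption):

	package main

	import (
		"fmt"
		"io"
		"net/http"
	)

	// probe issues a GET and prints status plus body; CoreDNS answers
	// HTTP 200 on /health and /ready while the respective plugin is healthy.
	func probe(url string) {
		resp, err := http.Get(url)
		if err != nil {
			fmt.Println(url, "error:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(url, resp.StatusCode, string(body))
	}

	func main() {
		probe("http://127.0.0.1:8080/health") // health plugin (liveness)
		probe("http://127.0.0.1:8181/ready")  // ready plugin (readiness)
	}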
	
	
	==> describe nodes <==
	Name:               functional-698418
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-698418
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1
	                    minikube.k8s.io/name=functional-698418
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T00_18_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 00:18:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-698418
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 00:42:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 00:39:04 +0000   Wed, 17 Dec 2025 00:18:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 00:39:04 +0000   Wed, 17 Dec 2025 00:18:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 00:39:04 +0000   Wed, 17 Dec 2025 00:18:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 00:39:04 +0000   Wed, 17 Dec 2025 00:18:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.109
	  Hostname:    functional-698418
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001784Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001784Ki
	  pods:               110
	System Info:
	  Machine ID:                 4dd443fff9c14f00b485986b75d25594
	  System UUID:                4dd443ff-f9c1-4f00-b485-986b75d25594
	  Boot ID:                    cfa996e4-9a58-45c9-b4e8-fda78786a8ea
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7d764666f9-p2tnv                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     24m
	  kube-system                 etcd-functional-698418                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         24m
	  kube-system                 kube-apiserver-functional-698418             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-functional-698418    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-proxy-qmz66                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-scheduler-functional-698418             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  24m   node-controller  Node functional-698418 event: Registered Node functional-698418 in Controller
	  Normal  RegisteredNode  21m   node-controller  Node functional-698418 event: Registered Node functional-698418 in Controller
	  Normal  RegisteredNode  18m   node-controller  Node functional-698418 event: Registered Node functional-698418 in Controller
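	The Conditions block above is kubectl's rendering of the Node object's status, and the same data can be fetched programmatically with client-go. A minimal sketch, assuming a kubeconfig at the default ~/.kube/config location (the path is an assumption):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the default kubeconfig (~/.kube/config).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		node, err := cs.CoreV1().Nodes().Get(context.Background(),
			"functional-698418", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Print the same Type/Status/Reason columns shown above.
		for _, c := range node.Status.Conditions {
			fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
		}
	}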
	
	
	==> dmesg <==
	[Dec17 00:18] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001752] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.001838] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.180772] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000015] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.087000] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.097371] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.150666] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.082933] kauditd_printk_skb: 18 callbacks suppressed
	[  +3.378145] kauditd_printk_skb: 296 callbacks suppressed
	[Dec17 00:19] kauditd_printk_skb: 350 callbacks suppressed
	[Dec17 00:21] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.190827] kauditd_printk_skb: 57 callbacks suppressed
	[Dec17 00:22] kauditd_printk_skb: 12 callbacks suppressed
	[Dec17 00:24] kauditd_printk_skb: 254 callbacks suppressed
	[  +4.295954] kauditd_printk_skb: 154 callbacks suppressed
	[Dec17 00:25] kauditd_printk_skb: 134 callbacks suppressed
	[Dec17 00:28] kauditd_printk_skb: 14 callbacks suppressed
	[  +4.344537] kauditd_printk_skb: 14 callbacks suppressed
	[Dec17 00:34] kauditd_printk_skb: 2 callbacks suppressed
	[Dec17 00:38] crun[9818]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +2.923913] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [4ccf8afdca857a06f5fafcae2d2203299adc9758a4deda7044a39b8fa9c78cf0] <==
	{"level":"warn","ts":"2025-12-17T00:21:35.657581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:21:35.667886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:21:35.672758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:21:35.684396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:21:35.706567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:21:35.715915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:21:35.760731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51998","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-17T00:22:44.130881Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-17T00:22:44.130978Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-698418","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.109:2380"],"advertise-client-urls":["https://192.168.39.109:2379"]}
	{"level":"error","ts":"2025-12-17T00:22:44.131075Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-17T00:22:44.221581Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-17T00:22:44.221679Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-17T00:22:44.221712Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"22872ffef731375a","current-leader-member-id":"22872ffef731375a"}
	{"level":"info","ts":"2025-12-17T00:22:44.221792Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-17T00:22:44.221802Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-17T00:22:44.221922Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-17T00:22:44.222028Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-17T00:22:44.222046Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-17T00:22:44.222103Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.109:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-17T00:22:44.222119Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.109:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-17T00:22:44.222160Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.109:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-17T00:22:44.225852Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.109:2380"}
	{"level":"error","ts":"2025-12-17T00:22:44.225922Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.109:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-17T00:22:44.225945Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.109:2380"}
	{"level":"info","ts":"2025-12-17T00:22:44.225950Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-698418","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.109:2380"],"advertise-client-urls":["https://192.168.39.109:2379"]}
	
	
	==> etcd [6305bb233aef9c18d4ca9912e6cec1535112b9389a53466a2ae441f3d1c8b630] <==
	{"level":"warn","ts":"2025-12-17T00:24:27.337374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.345255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.354619Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.365619Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.374882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.383058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.391645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.400215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.408059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.415237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.422074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.440217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.454486Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.462999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.477140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.484854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.493591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.501969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.546639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39884","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-17T00:34:26.939994Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1007}
	{"level":"info","ts":"2025-12-17T00:34:26.965323Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1007,"took":"23.382262ms","hash":791219146,"current-db-size-bytes":2891776,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":1105920,"current-db-size-in-use":"1.1 MB"}
	{"level":"info","ts":"2025-12-17T00:34:26.965399Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":791219146,"revision":1007,"compact-revision":-1}
	{"level":"info","ts":"2025-12-17T00:39:26.948677Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1263}
	{"level":"info","ts":"2025-12-17T00:39:26.952857Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1263,"took":"3.836141ms","hash":1911952928,"current-db-size-bytes":2891776,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":1781760,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-12-17T00:39:26.952893Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1911952928,"revision":1263,"compact-revision":1007}
	
	
	==> kernel <==
	 00:42:58 up 24 min,  0 users,  load average: 0.36, 0.23, 0.17
	Linux functional-698418 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Dec 16 03:41:16 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [62c820b9f36a95750703dfbcacf0065cf51309e962dd5c6e6ba051e32abbd972] <==
	I1217 00:24:28.281042       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1217 00:24:28.281300       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:28.281352       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1217 00:24:28.282140       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:28.282199       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:28.283545       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1217 00:24:28.292001       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1217 00:24:28.294125       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 00:24:28.967261       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 00:24:29.085463       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1217 00:24:30.326310       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 00:24:30.386614       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1217 00:24:30.419123       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 00:24:30.426405       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 00:24:31.694480       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 00:24:31.744240       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 00:28:41.476382       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.102.113.226"}
	I1217 00:28:45.404723       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.109.246.234"}
	I1217 00:28:45.460397       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 00:28:46.160211       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.101.111.117"}
	I1217 00:32:56.403257       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.97.89.136"}
	I1217 00:34:28.200355       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 00:34:58.933078       1 controller.go:667] quota admission added evaluator for: namespaces
	I1217 00:34:59.188964       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.88.77"}
	I1217 00:34:59.214738       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.7.189"}
	
	
	==> kube-controller-manager [089ad298c6676a699fe577e2d5d71b0c488b5a9d3e33173bdc20901711843fd1] <==
	I1217 00:21:39.058814       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.063583       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.063639       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.063668       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.064068       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.064411       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.064486       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.064629       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.064697       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.064728       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.064803       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.064959       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.065112       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.066039       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.067219       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 00:21:39.067679       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.068598       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.068694       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.068784       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.069012       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.073643       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.154923       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.154942       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1217 00:21:39.154946       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1217 00:21:39.168785       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-controller-manager [7787544c3b26bf83a9e4d701ebadf14ddc0d6c98ed581e4c9a78f8a3d5809692] <==
	I1217 00:24:31.474366       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.474425       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.475861       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1217 00:24:31.476235       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-698418"
	I1217 00:24:31.474433       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.472707       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.472720       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.472725       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.476353       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1217 00:24:31.474449       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.472698       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.474439       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.478289       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.474444       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.478338       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.541643       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.751306       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	E1217 00:34:59.037812       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1217 00:34:59.048787       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1217 00:34:59.057113       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1217 00:34:59.065963       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1217 00:34:59.074857       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1217 00:34:59.074954       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1217 00:34:59.083829       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1217 00:34:59.091457       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [6da27c7e1968f00886d58afd572464f1215b610d6f810eae1693c39394b159ab] <==
	I1217 00:22:50.913830       1 server_linux.go:53] "Using iptables proxy"
	I1217 00:22:51.014579       1 shared_informer.go:370] "Waiting for caches to sync"
	
	
	==> kube-proxy [8b30d8f3ed892ad639fe78bbed50c837c0bbb626952234350ebf7fa8426e1db2] <==
	I1217 00:24:29.960125       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 00:24:30.061466       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:30.061553       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.109"]
	E1217 00:24:30.061659       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 00:24:30.113793       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1217 00:24:30.114041       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1217 00:24:30.114249       1 server_linux.go:136] "Using iptables Proxier"
	I1217 00:24:30.163601       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 00:24:30.164393       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1217 00:24:30.164446       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 00:24:30.181144       1 config.go:200] "Starting service config controller"
	I1217 00:24:30.185060       1 config.go:106] "Starting endpoint slice config controller"
	I1217 00:24:30.185270       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 00:24:30.185303       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 00:24:30.185307       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 00:24:30.181175       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 00:24:30.198089       1 config.go:309] "Starting node config controller"
	I1217 00:24:30.198119       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 00:24:30.287379       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 00:24:30.289567       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 00:24:30.289579       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 00:24:30.298372       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [95a7023d7b96443435118fb950ed0496e6a19ddc94f55a8e252d4ea702aa939b] <==
	E1217 00:21:36.448305       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope"
	E1217 00:21:36.449061       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1217 00:21:36.449576       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1217 00:21:36.453581       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1217 00:21:36.448485       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1217 00:21:36.449996       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1217 00:21:36.450331       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1217 00:21:36.451038       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1217 00:21:36.451413       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1217 00:21:36.451819       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1217 00:21:36.453319       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1217 00:21:36.449614       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1217 00:21:36.453890       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1217 00:21:36.454295       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1217 00:21:36.459695       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1217 00:21:36.459774       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1217 00:21:36.459849       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1217 00:21:36.460344       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	I1217 00:21:39.389301       1 shared_informer.go:377] "Caches are synced"
	I1217 00:22:44.142866       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1217 00:22:44.142910       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1217 00:22:44.142920       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1217 00:22:44.142981       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 00:22:44.143070       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1217 00:22:44.143163       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 17 00:42:25 functional-698418 kubelet[6659]: E1217 00:42:25.321789    6659 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765932145321019418  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:189829}  inodes_used:{value:89}}"
	Dec 17 00:42:25 functional-698418 kubelet[6659]: E1217 00:42:25.322240    6659 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765932145321019418  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:189829}  inodes_used:{value:89}}"
	Dec 17 00:42:31 functional-698418 kubelet[6659]: E1217 00:42:31.938709    6659 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-698418" containerName="kube-scheduler"
	Dec 17 00:42:31 functional-698418 kubelet[6659]: E1217 00:42:31.948450    6659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-698418_kube-system_390b595ba70cd6ac1adab7b4d760d832_2\" already exists"
	Dec 17 00:42:31 functional-698418 kubelet[6659]: E1217 00:42:31.948492    6659 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-698418_kube-system_390b595ba70cd6ac1adab7b4d760d832_2\" already exists" pod="kube-system/kube-scheduler-functional-698418"
	Dec 17 00:42:31 functional-698418 kubelet[6659]: E1217 00:42:31.948556    6659 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-698418_kube-system_390b595ba70cd6ac1adab7b4d760d832_2\" already exists" pod="kube-system/kube-scheduler-functional-698418"
	Dec 17 00:42:31 functional-698418 kubelet[6659]: E1217 00:42:31.948600    6659 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-scheduler-functional-698418_kube-system(390b595ba70cd6ac1adab7b4d760d832)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-scheduler-functional-698418_kube-system(390b595ba70cd6ac1adab7b4d760d832)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-scheduler-functional-698418_kube-system_390b595ba70cd6ac1adab7b4d760d832_2\\\" already exists\"" pod="kube-system/kube-scheduler-functional-698418" podUID="390b595ba70cd6ac1adab7b4d760d832"
	Dec 17 00:42:35 functional-698418 kubelet[6659]: E1217 00:42:35.324211    6659 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765932155323653818  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:189829}  inodes_used:{value:89}}"
	Dec 17 00:42:35 functional-698418 kubelet[6659]: E1217 00:42:35.324233    6659 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765932155323653818  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:189829}  inodes_used:{value:89}}"
	Dec 17 00:42:42 functional-698418 kubelet[6659]: E1217 00:42:42.939085    6659 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-698418" containerName="kube-apiserver"
	Dec 17 00:42:44 functional-698418 kubelet[6659]: E1217 00:42:44.938331    6659 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-698418" containerName="kube-scheduler"
	Dec 17 00:42:44 functional-698418 kubelet[6659]: E1217 00:42:44.950068    6659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-698418_kube-system_390b595ba70cd6ac1adab7b4d760d832_2\" already exists"
	Dec 17 00:42:44 functional-698418 kubelet[6659]: E1217 00:42:44.950126    6659 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-698418_kube-system_390b595ba70cd6ac1adab7b4d760d832_2\" already exists" pod="kube-system/kube-scheduler-functional-698418"
	Dec 17 00:42:44 functional-698418 kubelet[6659]: E1217 00:42:44.950142    6659 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-698418_kube-system_390b595ba70cd6ac1adab7b4d760d832_2\" already exists" pod="kube-system/kube-scheduler-functional-698418"
	Dec 17 00:42:44 functional-698418 kubelet[6659]: E1217 00:42:44.950192    6659 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-scheduler-functional-698418_kube-system(390b595ba70cd6ac1adab7b4d760d832)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-scheduler-functional-698418_kube-system(390b595ba70cd6ac1adab7b4d760d832)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-scheduler-functional-698418_kube-system_390b595ba70cd6ac1adab7b4d760d832_2\\\" already exists\"" pod="kube-system/kube-scheduler-functional-698418" podUID="390b595ba70cd6ac1adab7b4d760d832"
	Dec 17 00:42:45 functional-698418 kubelet[6659]: E1217 00:42:45.327129    6659 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765932165326443445  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:189829}  inodes_used:{value:89}}"
	Dec 17 00:42:45 functional-698418 kubelet[6659]: E1217 00:42:45.327151    6659 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765932165326443445  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:189829}  inodes_used:{value:89}}"
	Dec 17 00:42:50 functional-698418 kubelet[6659]: E1217 00:42:50.938487    6659 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-698418" containerName="etcd"
	Dec 17 00:42:55 functional-698418 kubelet[6659]: E1217 00:42:55.329038    6659 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765932175328494898  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:189829}  inodes_used:{value:89}}"
	Dec 17 00:42:55 functional-698418 kubelet[6659]: E1217 00:42:55.329061    6659 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765932175328494898  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:189829}  inodes_used:{value:89}}"
	Dec 17 00:42:56 functional-698418 kubelet[6659]: E1217 00:42:56.938460    6659 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-698418" containerName="kube-scheduler"
	Dec 17 00:42:56 functional-698418 kubelet[6659]: E1217 00:42:56.952067    6659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-698418_kube-system_390b595ba70cd6ac1adab7b4d760d832_2\" already exists"
	Dec 17 00:42:56 functional-698418 kubelet[6659]: E1217 00:42:56.952112    6659 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-698418_kube-system_390b595ba70cd6ac1adab7b4d760d832_2\" already exists" pod="kube-system/kube-scheduler-functional-698418"
	Dec 17 00:42:56 functional-698418 kubelet[6659]: E1217 00:42:56.952126    6659 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-698418_kube-system_390b595ba70cd6ac1adab7b4d760d832_2\" already exists" pod="kube-system/kube-scheduler-functional-698418"
	Dec 17 00:42:56 functional-698418 kubelet[6659]: E1217 00:42:56.952179    6659 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-scheduler-functional-698418_kube-system(390b595ba70cd6ac1adab7b4d760d832)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-scheduler-functional-698418_kube-system(390b595ba70cd6ac1adab7b4d760d832)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-scheduler-functional-698418_kube-system_390b595ba70cd6ac1adab7b4d760d832_2\\\" already exists\"" pod="kube-system/kube-scheduler-functional-698418" podUID="390b595ba70cd6ac1adab7b4d760d832"
	
	
	==> storage-provisioner [25dad2630a2cc7b34ac11027a3984e04a6f04faeb4aea873980cbc58aa6ad446] <==
	I1217 00:22:51.076154       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1217 00:22:51.080587       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [eeabbda62f1da3d54be70bfe16b34f07c8e8a53a40811a81f10481ac9ce44216] <==
	W1217 00:42:32.995790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:42:34.999810       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:42:35.009712       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:42:37.013803       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:42:37.019993       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:42:39.023633       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:42:39.028775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:42:41.032357       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:42:41.037898       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:42:43.041918       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:42:43.047532       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:42:45.051240       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:42:45.061954       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:42:47.065923       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:42:47.071662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:42:49.075424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:42:49.081220       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:42:51.084395       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:42:51.089701       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:42:53.093580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:42:53.102460       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:42:55.106346       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:42:55.111177       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:42:57.115455       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:42:57.124750       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-698418 -n functional-698418
helpers_test.go:270: (dbg) Run:  kubectl --context functional-698418 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: busybox-mount hello-node-5758569b79-z5vc8 hello-node-connect-9f67c86d4-n5xgg mysql-7d7b65bc95-m98rn sp-pod dashboard-metrics-scraper-5565989548-wpflw kubernetes-dashboard-b84665fb8-6dx8k
helpers_test.go:283: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context functional-698418 describe pod busybox-mount hello-node-5758569b79-z5vc8 hello-node-connect-9f67c86d4-n5xgg mysql-7d7b65bc95-m98rn sp-pod dashboard-metrics-scraper-5565989548-wpflw kubernetes-dashboard-b84665fb8-6dx8k
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context functional-698418 describe pod busybox-mount hello-node-5758569b79-z5vc8 hello-node-connect-9f67c86d4-n5xgg mysql-7d7b65bc95-m98rn sp-pod dashboard-metrics-scraper-5565989548-wpflw kubernetes-dashboard-b84665fb8-6dx8k: exit status 1 (99.149401ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  mount-munger:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    Environment:  <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p7t2t (ro)
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-p7t2t:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             hello-node-5758569b79-z5vc8
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=hello-node
	                  pod-template-hash=5758569b79
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-node-5758569b79
	Containers:
	  echo-server:
	    Image:        kicbase/echo-server
	    Port:         <none>
	    Host Port:    <none>
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rkgzf (ro)
	Volumes:
	  kube-api-access-rkgzf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             hello-node-connect-9f67c86d4-n5xgg
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=hello-node-connect
	                  pod-template-hash=9f67c86d4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-node-connect-9f67c86d4
	Containers:
	  echo-server:
	    Image:        kicbase/echo-server
	    Port:         <none>
	    Host Port:    <none>
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pj79f (ro)
	Volumes:
	  kube-api-access-pj79f:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             mysql-7d7b65bc95-m98rn
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=mysql
	                  pod-template-hash=7d7b65bc95
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/mysql-7d7b65bc95
	Containers:
	  mysql:
	    Image:      public.ecr.aws/docker/library/mysql:8.4
	    Port:       3306/TCP (mysql)
	    Host Port:  0/TCP (mysql)
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4wznv (ro)
	Volumes:
	  kube-api-access-4wznv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  myfrontend:
	    Image:        public.ecr.aws/nginx/nginx:alpine
	    Port:         <none>
	    Host Port:    <none>
	    Environment:  <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wsvrf (ro)
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-wsvrf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5565989548-wpflw" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-6dx8k" not found

** /stderr **
helpers_test.go:288: kubectl --context functional-698418 describe pod busybox-mount hello-node-5758569b79-z5vc8 hello-node-connect-9f67c86d4-n5xgg mysql-7d7b65bc95-m98rn sp-pod dashboard-metrics-scraper-5565989548-wpflw kubernetes-dashboard-b84665fb8-6dx8k: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (602.65s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (369.75s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [8aeec296-0f7d-489d-88c0-1a8f24bcdb27] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004862469s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-698418 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-698418 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-698418 get pvc myclaim -o=json
I1217 00:28:53.475681   17074 retry.go:31] will retry after 2.059531562s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:ddbe4b7d-1342-4b1c-a39a-a0e53b6f9ceb ResourceVersion:976 Generation:0 CreationTimestamp:2025-12-17 00:28:53 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName:pvc-ddbe4b7d-1342-4b1c-a39a-a0e53b6f9ceb StorageClassName:0xc001f9f080 VolumeMode:0xc001f9f090 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
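For reference, the kubectl.kubernetes.io/last-applied-configuration annotation captured above corresponds to a claim manifest along the following lines (a reconstruction from the logged annotation; the exact layout of testdata/storage-provisioner/pvc.yaml is an assumption):

	# Reconstructed from the last-applied-configuration annotation in the log above.
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: myclaim
	  namespace: default
	spec:
	  accessModes:
	    - ReadWriteOnce
	  resources:
	    requests:
	      storage: 500Mi
	  volumeMode: Filesystem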
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-698418 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-698418 apply -f testdata/storage-provisioner/pod.yaml
I1217 00:28:55.718498   17074 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [122015fd-0e79-408c-a688-b2b62f8f65dd] Pending
E1217 00:30:07.728161   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:32:12.864078   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-069802/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_pvc_test.go:140: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 6m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-698418 -n functional-698418
functional_test_pvc_test.go:140: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-12-17 00:34:55.95125009 +0000 UTC m=+1722.654177194
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-698418 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-698418 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             <none>
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Containers:
  myfrontend:
    Image:        public.ecr.aws/nginx/nginx:alpine
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wsvrf (ro)
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-wsvrf:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-698418 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-698418 logs sp-pod -n default:
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 6m0s: context deadline exceeded
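The describe output above fixes the spec of the pod that never left Pending; as a manifest it is roughly the following (a sketch inferred from the captured fields, not the literal contents of testdata/storage-provisioner/pod.yaml):

	# Sketch of sp-pod reconstructed from the kubectl describe output above.
	apiVersion: v1
	kind: Pod
	metadata:
	  name: sp-pod
	  namespace: default
	  labels:
	    test: storage-provisioner
	spec:
	  containers:
	    - name: myfrontend
	      image: public.ecr.aws/nginx/nginx:alpine
	      volumeMounts:
	        - name: mypd
	          mountPath: /tmp/mount
	  volumes:
	    - name: mypd
	      persistentVolumeClaim:
	        claimName: myclaim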
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-698418 -n functional-698418
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-698418 logs -n 25: (1.265842066s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-698418 image save --daemon kicbase/echo-server:functional-698418 --alsologtostderr                                                       │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:28 UTC │ 17 Dec 25 00:28 UTC │
	│ mount   │ -p functional-698418 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1769150467/001:/mount-9p --alsologtostderr -v=1              │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:28 UTC │                     │
	│ ssh     │ functional-698418 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:28 UTC │                     │
	│ ssh     │ functional-698418 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:28 UTC │ 17 Dec 25 00:28 UTC │
	│ ssh     │ functional-698418 ssh -- ls -la /mount-9p                                                                                                           │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:28 UTC │ 17 Dec 25 00:28 UTC │
	│ ssh     │ functional-698418 ssh cat /mount-9p/test-1765931331407799177                                                                                        │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:28 UTC │ 17 Dec 25 00:28 UTC │
	│ ssh     │ functional-698418 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │                     │
	│ ssh     │ functional-698418 ssh sudo umount -f /mount-9p                                                                                                      │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │ 17 Dec 25 00:32 UTC │
	│ ssh     │ functional-698418 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │                     │
	│ mount   │ -p functional-698418 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2996149672/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │                     │
	│ ssh     │ functional-698418 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │ 17 Dec 25 00:32 UTC │
	│ ssh     │ functional-698418 ssh -- ls -la /mount-9p                                                                                                           │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │ 17 Dec 25 00:32 UTC │
	│ ssh     │ functional-698418 ssh sudo umount -f /mount-9p                                                                                                      │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │                     │
	│ mount   │ -p functional-698418 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2837782444/001:/mount1 --alsologtostderr -v=1                │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │                     │
	│ mount   │ -p functional-698418 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2837782444/001:/mount3 --alsologtostderr -v=1                │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │                     │
	│ ssh     │ functional-698418 ssh findmnt -T /mount1                                                                                                            │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │                     │
	│ mount   │ -p functional-698418 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2837782444/001:/mount2 --alsologtostderr -v=1                │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │                     │
	│ ssh     │ functional-698418 ssh findmnt -T /mount1                                                                                                            │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │ 17 Dec 25 00:32 UTC │
	│ ssh     │ functional-698418 ssh findmnt -T /mount2                                                                                                            │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │ 17 Dec 25 00:32 UTC │
	│ ssh     │ functional-698418 ssh findmnt -T /mount3                                                                                                            │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │ 17 Dec 25 00:32 UTC │
	│ mount   │ -p functional-698418 --kill=true                                                                                                                    │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │                     │
	│ addons  │ functional-698418 addons list                                                                                                                       │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │ 17 Dec 25 00:32 UTC │
	│ addons  │ functional-698418 addons list -o json                                                                                                               │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │ 17 Dec 25 00:32 UTC │
	│ start   │ -p functional-698418 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0         │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │                     │
	│ start   │ -p functional-698418 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                   │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
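	The mount entries in the audit table above exercise minikube's 9p mount: a host directory is exported into the guest, verified with findmnt over ssh, unmounted, and the mount process is torn down with --kill. A condensed sketch of that sequence, with the host path standing in for the per-test temp directory shown in the table:
	
	out/minikube-linux-amd64 mount -p functional-698418 /tmp/<testdir>:/mount-9p --alsologtostderr -v=1 --port 46464 &
	out/minikube-linux-amd64 -p functional-698418 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-698418 ssh "sudo umount -f /mount-9p"
	out/minikube-linux-amd64 mount -p functional-698418 --kill=true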
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:32:56
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:32:56.205587   28922 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:32:56.205863   28922 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:32:56.205874   28922 out.go:374] Setting ErrFile to fd 2...
	I1217 00:32:56.205878   28922 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:32:56.206100   28922 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12839/.minikube/bin
	I1217 00:32:56.206528   28922 out.go:368] Setting JSON to false
	I1217 00:32:56.207427   28922 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4522,"bootTime":1765927054,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 00:32:56.207477   28922 start.go:143] virtualization: kvm guest
	I1217 00:32:56.209549   28922 out.go:179] * [functional-698418] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 00:32:56.210852   28922 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:32:56.210938   28922 notify.go:221] Checking for updates...
	I1217 00:32:56.213397   28922 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:32:56.214605   28922 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-12839/kubeconfig
	I1217 00:32:56.216107   28922 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12839/.minikube
	I1217 00:32:56.217505   28922 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 00:32:56.218909   28922 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:32:56.220906   28922 config.go:182] Loaded profile config "functional-698418": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:32:56.221552   28922 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:32:56.253447   28922 out.go:179] * Using the kvm2 driver based on existing profile
	I1217 00:32:56.254890   28922 start.go:309] selected driver: kvm2
	I1217 00:32:56.254901   28922 start.go:927] validating driver "kvm2" against &{Name:functional-698418 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-698418 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:32:56.254984   28922 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:32:56.255821   28922 cni.go:84] Creating CNI manager for ""
	I1217 00:32:56.255882   28922 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 00:32:56.255915   28922 start.go:353] cluster config:
	{Name:functional-698418 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-698418 Namespace:default AP
IServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:32:56.257361   28922 out.go:179] * dry-run validation complete!
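	The "Last Start" trace above is the dry-run validation recorded in the audit table: it loads the existing functional-698418 profile, re-validates the kvm2 driver and cluster config, and exits without touching the VM. To reproduce it against the same profile (flags copied from the audit entry):
	
	out/minikube-linux-amd64 start -p functional-698418 --dry-run --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.35.0-beta.0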
	
	
	==> CRI-O <==
	Dec 17 00:34:56 functional-698418 crio[6296]: time="2025-12-17 00:34:56.702730401Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=35195cc8-f05e-47b7-9a36-254e86cd30c2 name=/runtime.v1.RuntimeService/Version
	Dec 17 00:34:56 functional-698418 crio[6296]: time="2025-12-17 00:34:56.705557152Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6a07a7cc-2d13-4755-ac1b-9922e3b0b14b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 00:34:56 functional-698418 crio[6296]: time="2025-12-17 00:34:56.706378776Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765931696706352588,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:164169,},InodesUsed:&UInt64Value{Value:73,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6a07a7cc-2d13-4755-ac1b-9922e3b0b14b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 00:34:56 functional-698418 crio[6296]: time="2025-12-17 00:34:56.707419709Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3f54babf-7f91-482d-a2fe-4aad6888a600 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 00:34:56 functional-698418 crio[6296]: time="2025-12-17 00:34:56.707622439Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3f54babf-7f91-482d-a2fe-4aad6888a600 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 00:34:56 functional-698418 crio[6296]: time="2025-12-17 00:34:56.707856169Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b5f5abaf95cb222939db1aa90e240c94409e5d233b67a38091d26213a9ba10de,PodSandboxId:45138ffa0bb91dd23f83c681b3b9a410d900a685f09db3e936c18d79bcc7ff70,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765931069848478198,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-p2tnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375f206a-98e8-4a86-b794-274b2ac5d46d,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b30d8f3ed892ad639fe78bbed50c837c0bbb626952234350ebf7fa8426e1db2,PodSandboxId:1fbefd3a7421f756224811c63d74b6b380f3f61c9739cb3784937ab0f86c4362,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1765931069582611906,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmz66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 380aa506-9f03-4398-8d13-ac938ed6953c,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeabbda62f1da3d54be70bfe16b34f07c8e8a53a40811a81f10481ac9ce44216,PodSandboxId:a09572a7b0786aac784d1d4cef2dc3002c196faae35dcd055e33dd7b23a301bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765931069585947824,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aeec296-0f7d-489d-88c0-1a8f24bcdb27,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62c820b9f36a95750703dfbcacf0065cf51309e962dd5c6e6ba051e32abbd972,PodSandboxId:cb65bcef9b7f3551813702c3d2e3535cec74c3f21ef85ef8660fc3b5ed3ee837,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:1765931065819368164,Labels:map[string]string{io.kubernetes.container.name:
kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce7e9b17a3dd76e454bf214ca11d85f,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7787544c3b26bf83a9e4d701ebadf14ddc0d6c98ed581e4c9a78f8a3d5809692,PodSandboxId:5a541b1e17042c254bf59885fe7f01bb4f3cb8682cf724a0d3dc11c806811a96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d
6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1765931065782123739,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a9045876df478aae3a7b636723bc540,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6305bb233aef9c18d4ca9912e6cec1535112b9389a53466a2ae441f3d1c8b630,PodSandboxId:47788f5b505e2ab0b18e49d43a9d48485f0467d50546073946f22bc9d65190f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765931065773166649,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a712d99056792744476561e1a0361d20,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe278b7670e033003fe8649b2824cf6b6516ce6b4b51066c8b25075f6c74ecd1,PodSandboxId:167bfbac01f7e4c703b6085e2349b7dc06d514f766c93cf3cfe7e69d8a32a706,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa
5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765930971276417771,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-p2tnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375f206a-98e8-4a86-b794-274b2ac5d46d,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25dad2630a2cc7b34ac11027a3984e04a6f04faeb4aea873980cbc58aa6ad446,PodSandboxId:bca5c63a70f5569e938857e5287ac12f75220866ec46e3511d6af97931577564,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765930970784659776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aeec296-0f7d-489d-88c0-1a8f24bcdb27,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6da27c7e1968f00886d58afd572464f1215b610d6f810eae1693c39394b159ab,PodSandboxId:6e560eef5590b64b6701674c0b23f8ba8f5ae436c11a5b33c8b8479b163a789f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_EXITED,CreatedAt:1765930970517125898,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmz66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 380aa506-9f03-4398-8d13-ac938ed6953c,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessageP
olicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ccf8afdca857a06f5fafcae2d2203299adc9758a4deda7044a39b8fa9c78cf0,PodSandboxId:5b1069943f833c3bb09b24a2a7a536da2fcef186396d925dddd710e4b8fa9150,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765930894126083795,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a712d99056792744476561e1a0361d20,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:089ad298c6676a699fe577e2d5d71b0c488b5a9d3e33173bdc20901711843fd1,PodSandboxId:d4cac43b8d39630edc4155c4315349ac5c82c0155fc2675a8a0df421a0b3e285,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765930894091115325,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a9045876df478aae3a7b636723bc540,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"host
Port\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95a7023d7b96443435118fb950ed0496e6a19ddc94f55a8e252d4ea702aa939b,PodSandboxId:affe536f1f44e4a5eec045150c989f9cf8613992cd1beaf52e0734244ce8c1bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765930894095856281,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 390b595ba70cd6ac1adab7b4d760d832,},Annotations:map[string]
string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3f54babf-7f91-482d-a2fe-4aad6888a600 name=/runtime.v1.RuntimeService/ListContainers
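	The CRI-O entries in this section are the runtime's debug trace of incoming CRI requests (Version, ImageFsInfo, ListContainers, ListPodSandbox). Roughly equivalent manual queries, assuming crictl is available inside the guest as it normally is in minikube VMs:
	
	out/minikube-linux-amd64 -p functional-698418 ssh "sudo crictl version"
	out/minikube-linux-amd64 -p functional-698418 ssh "sudo crictl imagefsinfo"
	out/minikube-linux-amd64 -p functional-698418 ssh "sudo crictl ps -a"
	out/minikube-linux-amd64 -p functional-698418 ssh "sudo crictl pods"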
	Dec 17 00:34:56 functional-698418 crio[6296]: time="2025-12-17 00:34:56.720044791Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=69573b0e-84c6-45e4-9c1c-8e67baf26049 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 17 00:34:56 functional-698418 crio[6296]: time="2025-12-17 00:34:56.720310236Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:45138ffa0bb91dd23f83c681b3b9a410d900a685f09db3e936c18d79bcc7ff70,Metadata:&PodSandboxMetadata{Name:coredns-7d764666f9-p2tnv,Uid:375f206a-98e8-4a86-b794-274b2ac5d46d,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1765931069417254183,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7d764666f9-p2tnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375f206a-98e8-4a86-b794-274b2ac5d46d,k8s-app: kube-dns,pod-template-hash: 7d764666f9,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-17T00:24:28.898956791Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a09572a7b0786aac784d1d4cef2dc3002c196faae35dcd055e33dd7b23a301bf,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:8aeec296-0f7d-489d-88c0-1a8f24bcdb27,Namespace:kube-system
,Attempt:3,},State:SANDBOX_READY,CreatedAt:1765931069256290564,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aeec296-0f7d-489d-88c0-1a8f24bcdb27,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\
":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-12-17T00:24:28.898955389Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1fbefd3a7421f756224811c63d74b6b380f3f61c9739cb3784937ab0f86c4362,Metadata:&PodSandboxMetadata{Name:kube-proxy-qmz66,Uid:380aa506-9f03-4398-8d13-ac938ed6953c,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1765931069240460749,Labels:map[string]string{controller-revision-hash: 7bd5454df7,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-qmz66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 380aa506-9f03-4398-8d13-ac938ed6953c,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-17T00:24:28.898961527Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5a541b1e17042c254bf59885fe7f01bb4f3cb8682cf724a0d3dc11c806811a96,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-functional-698418,Uid:2
a9045876df478aae3a7b636723bc540,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1765931065589722246,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a9045876df478aae3a7b636723bc540,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2a9045876df478aae3a7b636723bc540,kubernetes.io/config.seen: 2025-12-17T00:24:24.906954007Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:cb65bcef9b7f3551813702c3d2e3535cec74c3f21ef85ef8660fc3b5ed3ee837,Metadata:&PodSandboxMetadata{Name:kube-apiserver-functional-698418,Uid:1ce7e9b17a3dd76e454bf214ca11d85f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765931065589245770,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-functional-698418,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 1ce7e9b17a3dd76e454bf214ca11d85f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.109:8441,kubernetes.io/config.hash: 1ce7e9b17a3dd76e454bf214ca11d85f,kubernetes.io/config.seen: 2025-12-17T00:24:24.906952914Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:47788f5b505e2ab0b18e49d43a9d48485f0467d50546073946f22bc9d65190f8,Metadata:&PodSandboxMetadata{Name:etcd-functional-698418,Uid:a712d99056792744476561e1a0361d20,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1765931065588791216,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a712d99056792744476561e1a0361d20,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.109:2379,kubernetes.io/config.hash: a
712d99056792744476561e1a0361d20,kubernetes.io/config.seen: 2025-12-17T00:24:24.906945395Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:167bfbac01f7e4c703b6085e2349b7dc06d514f766c93cf3cfe7e69d8a32a706,Metadata:&PodSandboxMetadata{Name:coredns-7d764666f9-p2tnv,Uid:375f206a-98e8-4a86-b794-274b2ac5d46d,Namespace:kube-system,Attempt:2,},State:SANDBOX_NOTREADY,CreatedAt:1765930970318176395,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7d764666f9-p2tnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375f206a-98e8-4a86-b794-274b2ac5d46d,k8s-app: kube-dns,pod-template-hash: 7d764666f9,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-17T00:21:37.094813101Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bca5c63a70f5569e938857e5287ac12f75220866ec46e3511d6af97931577564,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:8aeec296-0f7d-489d-88c0-1a8f24bcdb27,Namespace:kube-system,Attempt:2,},Stat
e:SANDBOX_NOTREADY,CreatedAt:1765930970314314045,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aeec296-0f7d-489d-88c0-1a8f24bcdb27,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"
/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-12-17T00:21:37.094811769Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6e560eef5590b64b6701674c0b23f8ba8f5ae436c11a5b33c8b8479b163a789f,Metadata:&PodSandboxMetadata{Name:kube-proxy-qmz66,Uid:380aa506-9f03-4398-8d13-ac938ed6953c,Namespace:kube-system,Attempt:2,},State:SANDBOX_NOTREADY,CreatedAt:1765930970020012264,Labels:map[string]string{controller-revision-hash: 7bd5454df7,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-qmz66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 380aa506-9f03-4398-8d13-ac938ed6953c,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-17T00:21:37.094809600Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d4cac43b8d39630edc4155c4315349ac5c82c0155fc2675a8a0df421a0b3e285,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-functional-698418,Uid:2a9045876df4
78aae3a7b636723bc540,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1765930768451249660,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a9045876df478aae3a7b636723bc540,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2a9045876df478aae3a7b636723bc540,kubernetes.io/config.seen: 2025-12-17T00:18:37.825797754Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:affe536f1f44e4a5eec045150c989f9cf8613992cd1beaf52e0734244ce8c1bd,Metadata:&PodSandboxMetadata{Name:kube-scheduler-functional-698418,Uid:390b595ba70cd6ac1adab7b4d760d832,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1765930768338346727,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-functional-698418,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 390b595ba70cd6ac1adab7b4d760d832,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 390b595ba70cd6ac1adab7b4d760d832,kubernetes.io/config.seen: 2025-12-17T00:18:37.825801625Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5b1069943f833c3bb09b24a2a7a536da2fcef186396d925dddd710e4b8fa9150,Metadata:&PodSandboxMetadata{Name:etcd-functional-698418,Uid:a712d99056792744476561e1a0361d20,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1765930768238266949,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a712d99056792744476561e1a0361d20,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.109:2379,kubernetes.io/config.hash: a712d99056792744476561e1a0361d20,kubernetes.io/config.seen: 2025-12-17T00:18:37.82580282
0Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=69573b0e-84c6-45e4-9c1c-8e67baf26049 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 17 00:34:56 functional-698418 crio[6296]: time="2025-12-17 00:34:56.721272437Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dc5d0031-6f96-4a0e-a257-62e30b2922e7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 00:34:56 functional-698418 crio[6296]: time="2025-12-17 00:34:56.721349549Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dc5d0031-6f96-4a0e-a257-62e30b2922e7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 00:34:56 functional-698418 crio[6296]: time="2025-12-17 00:34:56.721849917Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b5f5abaf95cb222939db1aa90e240c94409e5d233b67a38091d26213a9ba10de,PodSandboxId:45138ffa0bb91dd23f83c681b3b9a410d900a685f09db3e936c18d79bcc7ff70,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765931069848478198,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-p2tnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375f206a-98e8-4a86-b794-274b2ac5d46d,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b30d8f3ed892ad639fe78bbed50c837c0bbb626952234350ebf7fa8426e1db2,PodSandboxId:1fbefd3a7421f756224811c63d74b6b380f3f61c9739cb3784937ab0f86c4362,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1765931069582611906,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmz66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 380aa506-9f03-4398-8d13-ac938ed6953c,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeabbda62f1da3d54be70bfe16b34f07c8e8a53a40811a81f10481ac9ce44216,PodSandboxId:a09572a7b0786aac784d1d4cef2dc3002c196faae35dcd055e33dd7b23a301bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765931069585947824,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aeec296-0f7d-489d-88c0-1a8f24bcdb27,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62c820b9f36a95750703dfbcacf0065cf51309e962dd5c6e6ba051e32abbd972,PodSandboxId:cb65bcef9b7f3551813702c3d2e3535cec74c3f21ef85ef8660fc3b5ed3ee837,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:1765931065819368164,Labels:map[string]string{io.kubernetes.container.name:
kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce7e9b17a3dd76e454bf214ca11d85f,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7787544c3b26bf83a9e4d701ebadf14ddc0d6c98ed581e4c9a78f8a3d5809692,PodSandboxId:5a541b1e17042c254bf59885fe7f01bb4f3cb8682cf724a0d3dc11c806811a96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d
6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1765931065782123739,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a9045876df478aae3a7b636723bc540,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6305bb233aef9c18d4ca9912e6cec1535112b9389a53466a2ae441f3d1c8b630,PodSandboxId:47788f5b505e2ab0b18e49d43a9d48485f0467d50546073946f22bc9d65190f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765931065773166649,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a712d99056792744476561e1a0361d20,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe278b7670e033003fe8649b2824cf6b6516ce6b4b51066c8b25075f6c74ecd1,PodSandboxId:167bfbac01f7e4c703b6085e2349b7dc06d514f766c93cf3cfe7e69d8a32a706,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa
5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765930971276417771,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-p2tnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375f206a-98e8-4a86-b794-274b2ac5d46d,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25dad2630a2cc7b34ac11027a3984e04a6f04faeb4aea873980cbc58aa6ad446,PodSandboxId:bca5c63a70f5569e938857e5287ac12f75220866ec46e3511d6af97931577564,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765930970784659776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aeec296-0f7d-489d-88c0-1a8f24bcdb27,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6da27c7e1968f00886d58afd572464f1215b610d6f810eae1693c39394b159ab,PodSandboxId:6e560eef5590b64b6701674c0b23f8ba8f5ae436c11a5b33c8b8479b163a789f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_EXITED,CreatedAt:1765930970517125898,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmz66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 380aa506-9f03-4398-8d13-ac938ed6953c,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessageP
olicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ccf8afdca857a06f5fafcae2d2203299adc9758a4deda7044a39b8fa9c78cf0,PodSandboxId:5b1069943f833c3bb09b24a2a7a536da2fcef186396d925dddd710e4b8fa9150,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765930894126083795,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a712d99056792744476561e1a0361d20,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:089ad298c6676a699fe577e2d5d71b0c488b5a9d3e33173bdc20901711843fd1,PodSandboxId:d4cac43b8d39630edc4155c4315349ac5c82c0155fc2675a8a0df421a0b3e285,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765930894091115325,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a9045876df478aae3a7b636723bc540,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"host
Port\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95a7023d7b96443435118fb950ed0496e6a19ddc94f55a8e252d4ea702aa939b,PodSandboxId:affe536f1f44e4a5eec045150c989f9cf8613992cd1beaf52e0734244ce8c1bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765930894095856281,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 390b595ba70cd6ac1adab7b4d760d832,},Annotations:map[string]
string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dc5d0031-6f96-4a0e-a257-62e30b2922e7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 00:34:56 functional-698418 crio[6296]: time="2025-12-17 00:34:56.747878240Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bd23a44a-0f00-490e-b019-1adf439b7c2c name=/runtime.v1.RuntimeService/Version
	Dec 17 00:34:56 functional-698418 crio[6296]: time="2025-12-17 00:34:56.747978583Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bd23a44a-0f00-490e-b019-1adf439b7c2c name=/runtime.v1.RuntimeService/Version
	Dec 17 00:34:56 functional-698418 crio[6296]: time="2025-12-17 00:34:56.749655290Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f09ba507-e046-42b6-9f57-bbcef7278a8e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 00:34:56 functional-698418 crio[6296]: time="2025-12-17 00:34:56.750245188Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765931696750222198,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:164169,},InodesUsed:&UInt64Value{Value:73,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f09ba507-e046-42b6-9f57-bbcef7278a8e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 00:34:56 functional-698418 crio[6296]: time="2025-12-17 00:34:56.751657887Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e6923721-0fbc-46c9-903e-25a83b848481 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 00:34:56 functional-698418 crio[6296]: time="2025-12-17 00:34:56.751725613Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e6923721-0fbc-46c9-903e-25a83b848481 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 00:34:56 functional-698418 crio[6296]: time="2025-12-17 00:34:56.752128932Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b5f5abaf95cb222939db1aa90e240c94409e5d233b67a38091d26213a9ba10de,PodSandboxId:45138ffa0bb91dd23f83c681b3b9a410d900a685f09db3e936c18d79bcc7ff70,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765931069848478198,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-p2tnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375f206a-98e8-4a86-b794-274b2ac5d46d,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b30d8f3ed892ad639fe78bbed50c837c0bbb626952234350ebf7fa8426e1db2,PodSandboxId:1fbefd3a7421f756224811c63d74b6b380f3f61c9739cb3784937ab0f86c4362,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1765931069582611906,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmz66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 380aa506-9f03-4398-8d13-ac938ed6953c,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeabbda62f1da3d54be70bfe16b34f07c8e8a53a40811a81f10481ac9ce44216,PodSandboxId:a09572a7b0786aac784d1d4cef2dc3002c196faae35dcd055e33dd7b23a301bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765931069585947824,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aeec296-0f7d-489d-88c0-1a8f24bcdb27,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62c820b9f36a95750703dfbcacf0065cf51309e962dd5c6e6ba051e32abbd972,PodSandboxId:cb65bcef9b7f3551813702c3d2e3535cec74c3f21ef85ef8660fc3b5ed3ee837,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:1765931065819368164,Labels:map[string]string{io.kubernetes.container.name:
kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce7e9b17a3dd76e454bf214ca11d85f,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7787544c3b26bf83a9e4d701ebadf14ddc0d6c98ed581e4c9a78f8a3d5809692,PodSandboxId:5a541b1e17042c254bf59885fe7f01bb4f3cb8682cf724a0d3dc11c806811a96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d
6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1765931065782123739,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a9045876df478aae3a7b636723bc540,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6305bb233aef9c18d4ca9912e6cec1535112b9389a53466a2ae441f3d1c8b630,PodSandboxId:47788f5b505e2ab0b18e49d43a9d48485f0467d50546073946f22bc9d65190f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765931065773166649,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a712d99056792744476561e1a0361d20,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe278b7670e033003fe8649b2824cf6b6516ce6b4b51066c8b25075f6c74ecd1,PodSandboxId:167bfbac01f7e4c703b6085e2349b7dc06d514f766c93cf3cfe7e69d8a32a706,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa
5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765930971276417771,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-p2tnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375f206a-98e8-4a86-b794-274b2ac5d46d,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25dad2630a2cc7b34ac11027a3984e04a6f04faeb4aea873980cbc58aa6ad446,PodSandboxId:bca5c63a70f5569e938857e5287ac12f75220866ec46e3511d6af97931577564,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765930970784659776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aeec296-0f7d-489d-88c0-1a8f24bcdb27,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6da27c7e1968f00886d58afd572464f1215b610d6f810eae1693c39394b159ab,PodSandboxId:6e560eef5590b64b6701674c0b23f8ba8f5ae436c11a5b33c8b8479b163a789f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_EXITED,CreatedAt:1765930970517125898,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmz66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 380aa506-9f03-4398-8d13-ac938ed6953c,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessageP
olicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ccf8afdca857a06f5fafcae2d2203299adc9758a4deda7044a39b8fa9c78cf0,PodSandboxId:5b1069943f833c3bb09b24a2a7a536da2fcef186396d925dddd710e4b8fa9150,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765930894126083795,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a712d99056792744476561e1a0361d20,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:089ad298c6676a699fe577e2d5d71b0c488b5a9d3e33173bdc20901711843fd1,PodSandboxId:d4cac43b8d39630edc4155c4315349ac5c82c0155fc2675a8a0df421a0b3e285,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765930894091115325,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a9045876df478aae3a7b636723bc540,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"host
Port\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95a7023d7b96443435118fb950ed0496e6a19ddc94f55a8e252d4ea702aa939b,PodSandboxId:affe536f1f44e4a5eec045150c989f9cf8613992cd1beaf52e0734244ce8c1bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765930894095856281,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 390b595ba70cd6ac1adab7b4d760d832,},Annotations:map[string]
string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e6923721-0fbc-46c9-903e-25a83b848481 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 00:34:56 functional-698418 crio[6296]: time="2025-12-17 00:34:56.782282822Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6cca1706-5b9f-43ab-96cd-e08788b73d22 name=/runtime.v1.RuntimeService/Version
	Dec 17 00:34:56 functional-698418 crio[6296]: time="2025-12-17 00:34:56.782716175Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6cca1706-5b9f-43ab-96cd-e08788b73d22 name=/runtime.v1.RuntimeService/Version
	Dec 17 00:34:56 functional-698418 crio[6296]: time="2025-12-17 00:34:56.785144608Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8e5bb440-10fb-4515-9a1b-c49a33ff71cf name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 00:34:56 functional-698418 crio[6296]: time="2025-12-17 00:34:56.785993141Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765931696785968775,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:164169,},InodesUsed:&UInt64Value{Value:73,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8e5bb440-10fb-4515-9a1b-c49a33ff71cf name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 00:34:56 functional-698418 crio[6296]: time="2025-12-17 00:34:56.787262769Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bf07a3c6-b4de-4698-8312-413b322db548 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 00:34:56 functional-698418 crio[6296]: time="2025-12-17 00:34:56.787317048Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bf07a3c6-b4de-4698-8312-413b322db548 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 00:34:56 functional-698418 crio[6296]: time="2025-12-17 00:34:56.787652395Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b5f5abaf95cb222939db1aa90e240c94409e5d233b67a38091d26213a9ba10de,PodSandboxId:45138ffa0bb91dd23f83c681b3b9a410d900a685f09db3e936c18d79bcc7ff70,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765931069848478198,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-p2tnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375f206a-98e8-4a86-b794-274b2ac5d46d,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b30d8f3ed892ad639fe78bbed50c837c0bbb626952234350ebf7fa8426e1db2,PodSandboxId:1fbefd3a7421f756224811c63d74b6b380f3f61c9739cb3784937ab0f86c4362,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1765931069582611906,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmz66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 380aa506-9f03-4398-8d13-ac938ed6953c,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeabbda62f1da3d54be70bfe16b34f07c8e8a53a40811a81f10481ac9ce44216,PodSandboxId:a09572a7b0786aac784d1d4cef2dc3002c196faae35dcd055e33dd7b23a301bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765931069585947824,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aeec296-0f7d-489d-88c0-1a8f24bcdb27,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62c820b9f36a95750703dfbcacf0065cf51309e962dd5c6e6ba051e32abbd972,PodSandboxId:cb65bcef9b7f3551813702c3d2e3535cec74c3f21ef85ef8660fc3b5ed3ee837,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:1765931065819368164,Labels:map[string]string{io.kubernetes.container.name:
kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce7e9b17a3dd76e454bf214ca11d85f,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7787544c3b26bf83a9e4d701ebadf14ddc0d6c98ed581e4c9a78f8a3d5809692,PodSandboxId:5a541b1e17042c254bf59885fe7f01bb4f3cb8682cf724a0d3dc11c806811a96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d
6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1765931065782123739,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a9045876df478aae3a7b636723bc540,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6305bb233aef9c18d4ca9912e6cec1535112b9389a53466a2ae441f3d1c8b630,PodSandboxId:47788f5b505e2ab0b18e49d43a9d48485f0467d50546073946f22bc9d65190f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765931065773166649,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a712d99056792744476561e1a0361d20,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe278b7670e033003fe8649b2824cf6b6516ce6b4b51066c8b25075f6c74ecd1,PodSandboxId:167bfbac01f7e4c703b6085e2349b7dc06d514f766c93cf3cfe7e69d8a32a706,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa
5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765930971276417771,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-p2tnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375f206a-98e8-4a86-b794-274b2ac5d46d,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25dad2630a2cc7b34ac11027a3984e04a6f04faeb4aea873980cbc58aa6ad446,PodSandboxId:bca5c63a70f5569e938857e5287ac12f75220866ec46e3511d6af97931577564,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765930970784659776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aeec296-0f7d-489d-88c0-1a8f24bcdb27,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6da27c7e1968f00886d58afd572464f1215b610d6f810eae1693c39394b159ab,PodSandboxId:6e560eef5590b64b6701674c0b23f8ba8f5ae436c11a5b33c8b8479b163a789f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_EXITED,CreatedAt:1765930970517125898,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmz66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 380aa506-9f03-4398-8d13-ac938ed6953c,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessageP
olicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ccf8afdca857a06f5fafcae2d2203299adc9758a4deda7044a39b8fa9c78cf0,PodSandboxId:5b1069943f833c3bb09b24a2a7a536da2fcef186396d925dddd710e4b8fa9150,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765930894126083795,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a712d99056792744476561e1a0361d20,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:089ad298c6676a699fe577e2d5d71b0c488b5a9d3e33173bdc20901711843fd1,PodSandboxId:d4cac43b8d39630edc4155c4315349ac5c82c0155fc2675a8a0df421a0b3e285,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765930894091115325,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a9045876df478aae3a7b636723bc540,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"host
Port\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95a7023d7b96443435118fb950ed0496e6a19ddc94f55a8e252d4ea702aa939b,PodSandboxId:affe536f1f44e4a5eec045150c989f9cf8613992cd1beaf52e0734244ce8c1bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765930894095856281,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 390b595ba70cd6ac1adab7b4d760d832,},Annotations:map[string]
string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bf07a3c6-b4de-4698-8312-413b322db548 name=/runtime.v1.RuntimeService/ListContainers
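
The Version, ImageFsInfo, and ListContainers entries above are routine CRI polling of CRI-O over its local socket; the same container list keeps coming back because nothing changed between requests. As an illustration only, a minimal Go sketch that issues the same ListContainers RPC through the public CRI v1 client could look like the following; the socket path (/var/run/crio/crio.sock, CRI-O's default) and the 12-character ID truncation are assumptions, not taken from these logs.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed default CRI-O socket path; adjust if the runtime is configured differently.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// An empty filter corresponds to the "No filters were applied" requests in the log above.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.Containers {
		// Print name, state (e.g. CONTAINER_RUNNING / CONTAINER_EXITED) and a short ID.
		fmt.Println(c.Metadata.Name, c.State, c.Id[:12])
	}
}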
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	b5f5abaf95cb2       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139   10 minutes ago      Running             coredns                   3                   45138ffa0bb91       coredns-7d764666f9-p2tnv                    kube-system
	eeabbda62f1da       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   10 minutes ago      Running             storage-provisioner       3                   a09572a7b0786       storage-provisioner                         kube-system
	8b30d8f3ed892       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   10 minutes ago      Running             kube-proxy                3                   1fbefd3a7421f       kube-proxy-qmz66                            kube-system
	62c820b9f36a9       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   10 minutes ago      Running             kube-apiserver            0                   cb65bcef9b7f3       kube-apiserver-functional-698418            kube-system
	7787544c3b26b       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   10 minutes ago      Running             kube-controller-manager   3                   5a541b1e17042       kube-controller-manager-functional-698418   kube-system
	6305bb233aef9       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   10 minutes ago      Running             etcd                      3                   47788f5b505e2       etcd-functional-698418                      kube-system
	fe278b7670e03       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139   12 minutes ago      Exited              coredns                   2                   167bfbac01f7e       coredns-7d764666f9-p2tnv                    kube-system
	25dad2630a2cc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   12 minutes ago      Exited              storage-provisioner       2                   bca5c63a70f55       storage-provisioner                         kube-system
	6da27c7e1968f       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   12 minutes ago      Exited              kube-proxy                2                   6e560eef5590b       kube-proxy-qmz66                            kube-system
	4ccf8afdca857       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   13 minutes ago      Exited              etcd                      2                   5b1069943f833       etcd-functional-698418                      kube-system
	95a7023d7b964       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   13 minutes ago      Exited              kube-scheduler            2                   affe536f1f44e       kube-scheduler-functional-698418            kube-system
	089ad298c6676       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   13 minutes ago      Exited              kube-controller-manager   2                   d4cac43b8d396       kube-controller-manager-functional-698418   kube-system
	
	
	==> coredns [b5f5abaf95cb222939db1aa90e240c94409e5d233b67a38091d26213a9ba10de] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:53680 - 57417 "HINFO IN 5216687169014558221.4342564943848837697. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.019890703s
	
	
	==> coredns [fe278b7670e033003fe8649b2824cf6b6516ce6b4b51066c8b25075f6c74ecd1] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:35643 - 58571 "HINFO IN 8723388857180390004.4112128438720857375. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.021681051s
	
	
	==> describe nodes <==
	Name:               functional-698418
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-698418
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1
	                    minikube.k8s.io/name=functional-698418
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T00_18_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 00:18:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-698418
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 00:34:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 00:34:09 +0000   Wed, 17 Dec 2025 00:18:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 00:34:09 +0000   Wed, 17 Dec 2025 00:18:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 00:34:09 +0000   Wed, 17 Dec 2025 00:18:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 00:34:09 +0000   Wed, 17 Dec 2025 00:18:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.109
	  Hostname:    functional-698418
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001784Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001784Ki
	  pods:               110
	System Info:
	  Machine ID:                 4dd443fff9c14f00b485986b75d25594
	  System UUID:                4dd443ff-f9c1-4f00-b485-986b75d25594
	  Boot ID:                    cfa996e4-9a58-45c9-b4e8-fda78786a8ea
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7d764666f9-p2tnv                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     16m
	  kube-system                 etcd-functional-698418                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         16m
	  kube-system                 kube-apiserver-functional-698418             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-698418    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-qmz66                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-functional-698418             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  16m   node-controller  Node functional-698418 event: Registered Node functional-698418 in Controller
	  Normal  RegisteredNode  13m   node-controller  Node functional-698418 event: Registered Node functional-698418 in Controller
	  Normal  RegisteredNode  10m   node-controller  Node functional-698418 event: Registered Node functional-698418 in Controller
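
The Conditions block above (MemoryPressure, DiskPressure, PIDPressure, Ready) is the node-condition set the kubelet heartbeats to the API server, and it can be read programmatically rather than via describe. A minimal client-go sketch, assuming a kubeconfig at the default ~/.kube/config location and the node name functional-698418 from this run:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// RecommendedHomeFile resolves to ~/.kube/config; using it here is an assumption of this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "functional-698418", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range node.Status.Conditions {
		// Mirrors the Conditions table: Type, Status, Reason, Message.
		fmt.Printf("%-16s %-6s %s: %s\n", c.Type, c.Status, c.Reason, c.Message)
	}
}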
	
	
	==> dmesg <==
	[Dec17 00:18] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001752] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.001838] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.180772] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000015] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.087000] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.097371] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.150666] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.082933] kauditd_printk_skb: 18 callbacks suppressed
	[  +3.378145] kauditd_printk_skb: 296 callbacks suppressed
	[Dec17 00:19] kauditd_printk_skb: 350 callbacks suppressed
	[Dec17 00:21] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.190827] kauditd_printk_skb: 57 callbacks suppressed
	[Dec17 00:22] kauditd_printk_skb: 12 callbacks suppressed
	[Dec17 00:24] kauditd_printk_skb: 254 callbacks suppressed
	[  +4.295954] kauditd_printk_skb: 154 callbacks suppressed
	[Dec17 00:25] kauditd_printk_skb: 134 callbacks suppressed
	[Dec17 00:28] kauditd_printk_skb: 14 callbacks suppressed
	[  +4.344537] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [4ccf8afdca857a06f5fafcae2d2203299adc9758a4deda7044a39b8fa9c78cf0] <==
	{"level":"warn","ts":"2025-12-17T00:21:35.657581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:21:35.667886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:21:35.672758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:21:35.684396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:21:35.706567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:21:35.715915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:21:35.760731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51998","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-17T00:22:44.130881Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-17T00:22:44.130978Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-698418","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.109:2380"],"advertise-client-urls":["https://192.168.39.109:2379"]}
	{"level":"error","ts":"2025-12-17T00:22:44.131075Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-17T00:22:44.221581Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-17T00:22:44.221679Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-17T00:22:44.221712Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"22872ffef731375a","current-leader-member-id":"22872ffef731375a"}
	{"level":"info","ts":"2025-12-17T00:22:44.221792Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-17T00:22:44.221802Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-17T00:22:44.221922Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-17T00:22:44.222028Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-17T00:22:44.222046Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-17T00:22:44.222103Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.109:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-17T00:22:44.222119Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.109:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-17T00:22:44.222160Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.109:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-17T00:22:44.225852Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.109:2380"}
	{"level":"error","ts":"2025-12-17T00:22:44.225922Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.109:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-17T00:22:44.225945Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.109:2380"}
	{"level":"info","ts":"2025-12-17T00:22:44.225950Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-698418","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.109:2380"],"advertise-client-urls":["https://192.168.39.109:2379"]}
	
	
	==> etcd [6305bb233aef9c18d4ca9912e6cec1535112b9389a53466a2ae441f3d1c8b630] <==
	{"level":"warn","ts":"2025-12-17T00:24:27.313293Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.320616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.331004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.337374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.345255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.354619Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.365619Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.374882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.383058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.391645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.400215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.408059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.415237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.422074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.440217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.454486Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.462999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.477140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.484854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.493591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.501969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.546639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39884","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-17T00:34:26.939994Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1007}
	{"level":"info","ts":"2025-12-17T00:34:26.965323Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1007,"took":"23.382262ms","hash":791219146,"current-db-size-bytes":2891776,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":1105920,"current-db-size-in-use":"1.1 MB"}
	{"level":"info","ts":"2025-12-17T00:34:26.965399Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":791219146,"revision":1007,"compact-revision":-1}
	
	
	==> kernel <==
	 00:34:57 up 16 min,  0 users,  load average: 0.04, 0.14, 0.15
	Linux functional-698418 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Dec 16 03:41:16 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [62c820b9f36a95750703dfbcacf0065cf51309e962dd5c6e6ba051e32abbd972] <==
	I1217 00:24:28.277406       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1217 00:24:28.277444       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1217 00:24:28.277487       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1217 00:24:28.281042       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1217 00:24:28.281300       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:28.281352       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1217 00:24:28.282140       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:28.282199       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:28.283545       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1217 00:24:28.292001       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1217 00:24:28.294125       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 00:24:28.967261       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 00:24:29.085463       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1217 00:24:30.326310       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 00:24:30.386614       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1217 00:24:30.419123       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 00:24:30.426405       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 00:24:31.694480       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 00:24:31.744240       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 00:28:41.476382       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.102.113.226"}
	I1217 00:28:45.404723       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.109.246.234"}
	I1217 00:28:45.460397       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 00:28:46.160211       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.101.111.117"}
	I1217 00:32:56.403257       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.97.89.136"}
	I1217 00:34:28.200355       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [089ad298c6676a699fe577e2d5d71b0c488b5a9d3e33173bdc20901711843fd1] <==
	I1217 00:21:39.058814       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.063583       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.063639       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.063668       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.064068       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.064411       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.064486       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.064629       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.064697       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.064728       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.064803       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.064959       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.065112       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.066039       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.067219       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 00:21:39.067679       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.068598       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.068694       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.068784       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.069012       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.073643       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.154923       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.154942       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1217 00:21:39.154946       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1217 00:21:39.168785       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-controller-manager [7787544c3b26bf83a9e4d701ebadf14ddc0d6c98ed581e4c9a78f8a3d5809692] <==
	I1217 00:24:31.472714       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.472731       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.472736       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.474255       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.474347       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.475217       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1217 00:24:31.475237       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1217 00:24:31.474360       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.474366       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.474425       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.475861       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1217 00:24:31.476235       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-698418"
	I1217 00:24:31.474433       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.472707       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.472720       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.472725       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.476353       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1217 00:24:31.474449       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.472698       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.474439       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.478289       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.474444       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.478338       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.541643       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.751306       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [6da27c7e1968f00886d58afd572464f1215b610d6f810eae1693c39394b159ab] <==
	I1217 00:22:50.913830       1 server_linux.go:53] "Using iptables proxy"
	I1217 00:22:51.014579       1 shared_informer.go:370] "Waiting for caches to sync"
	
	
	==> kube-proxy [8b30d8f3ed892ad639fe78bbed50c837c0bbb626952234350ebf7fa8426e1db2] <==
	I1217 00:24:29.960125       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 00:24:30.061466       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:30.061553       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.109"]
	E1217 00:24:30.061659       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 00:24:30.113793       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1217 00:24:30.114041       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1217 00:24:30.114249       1 server_linux.go:136] "Using iptables Proxier"
	I1217 00:24:30.163601       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 00:24:30.164393       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1217 00:24:30.164446       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 00:24:30.181144       1 config.go:200] "Starting service config controller"
	I1217 00:24:30.185060       1 config.go:106] "Starting endpoint slice config controller"
	I1217 00:24:30.185270       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 00:24:30.185303       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 00:24:30.185307       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 00:24:30.181175       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 00:24:30.198089       1 config.go:309] "Starting node config controller"
	I1217 00:24:30.198119       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 00:24:30.287379       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 00:24:30.289567       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 00:24:30.289579       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 00:24:30.298372       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [95a7023d7b96443435118fb950ed0496e6a19ddc94f55a8e252d4ea702aa939b] <==
	E1217 00:21:36.448305       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope"
	E1217 00:21:36.449061       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1217 00:21:36.449576       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1217 00:21:36.453581       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1217 00:21:36.448485       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1217 00:21:36.449996       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1217 00:21:36.450331       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1217 00:21:36.451038       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1217 00:21:36.451413       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1217 00:21:36.451819       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1217 00:21:36.453319       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1217 00:21:36.449614       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1217 00:21:36.453890       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1217 00:21:36.454295       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1217 00:21:36.459695       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1217 00:21:36.459774       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1217 00:21:36.459849       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1217 00:21:36.460344       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	I1217 00:21:39.389301       1 shared_informer.go:377] "Caches are synced"
	I1217 00:22:44.142866       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1217 00:22:44.142910       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1217 00:22:44.142920       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1217 00:22:44.142981       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 00:22:44.143070       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1217 00:22:44.143163       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 17 00:34:25 functional-698418 kubelet[6659]: E1217 00:34:25.051965    6659 manager.go:1119] Failed to create existing container: /kubepods/burstable/poda712d99056792744476561e1a0361d20/crio-5b1069943f833c3bb09b24a2a7a536da2fcef186396d925dddd710e4b8fa9150: Error finding container 5b1069943f833c3bb09b24a2a7a536da2fcef186396d925dddd710e4b8fa9150: Status 404 returned error can't find the container with id 5b1069943f833c3bb09b24a2a7a536da2fcef186396d925dddd710e4b8fa9150
	Dec 17 00:34:25 functional-698418 kubelet[6659]: E1217 00:34:25.052146    6659 manager.go:1119] Failed to create existing container: /kubepods/besteffort/pod380aa506-9f03-4398-8d13-ac938ed6953c/crio-6e560eef5590b64b6701674c0b23f8ba8f5ae436c11a5b33c8b8479b163a789f: Error finding container 6e560eef5590b64b6701674c0b23f8ba8f5ae436c11a5b33c8b8479b163a789f: Status 404 returned error can't find the container with id 6e560eef5590b64b6701674c0b23f8ba8f5ae436c11a5b33c8b8479b163a789f
	Dec 17 00:34:25 functional-698418 kubelet[6659]: E1217 00:34:25.052620    6659 manager.go:1119] Failed to create existing container: /kubepods/burstable/pod2a9045876df478aae3a7b636723bc540/crio-d4cac43b8d39630edc4155c4315349ac5c82c0155fc2675a8a0df421a0b3e285: Error finding container d4cac43b8d39630edc4155c4315349ac5c82c0155fc2675a8a0df421a0b3e285: Status 404 returned error can't find the container with id d4cac43b8d39630edc4155c4315349ac5c82c0155fc2675a8a0df421a0b3e285
	Dec 17 00:34:25 functional-698418 kubelet[6659]: E1217 00:34:25.052838    6659 manager.go:1119] Failed to create existing container: /kubepods/besteffort/pod8aeec296-0f7d-489d-88c0-1a8f24bcdb27/crio-bca5c63a70f5569e938857e5287ac12f75220866ec46e3511d6af97931577564: Error finding container bca5c63a70f5569e938857e5287ac12f75220866ec46e3511d6af97931577564: Status 404 returned error can't find the container with id bca5c63a70f5569e938857e5287ac12f75220866ec46e3511d6af97931577564
	Dec 17 00:34:25 functional-698418 kubelet[6659]: E1217 00:34:25.053166    6659 manager.go:1119] Failed to create existing container: /kubepods/burstable/pod375f206a-98e8-4a86-b794-274b2ac5d46d/crio-167bfbac01f7e4c703b6085e2349b7dc06d514f766c93cf3cfe7e69d8a32a706: Error finding container 167bfbac01f7e4c703b6085e2349b7dc06d514f766c93cf3cfe7e69d8a32a706: Status 404 returned error can't find the container with id 167bfbac01f7e4c703b6085e2349b7dc06d514f766c93cf3cfe7e69d8a32a706
	Dec 17 00:34:25 functional-698418 kubelet[6659]: E1217 00:34:25.208590    6659 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765931665208146027  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:164169}  inodes_used:{value:73}}"
	Dec 17 00:34:25 functional-698418 kubelet[6659]: E1217 00:34:25.208634    6659 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765931665208146027  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:164169}  inodes_used:{value:73}}"
	Dec 17 00:34:25 functional-698418 kubelet[6659]: E1217 00:34:25.937910    6659 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-698418" containerName="etcd"
	Dec 17 00:34:30 functional-698418 kubelet[6659]: E1217 00:34:30.939080    6659 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-p2tnv" containerName="coredns"
	Dec 17 00:34:31 functional-698418 kubelet[6659]: E1217 00:34:31.938282    6659 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-698418" containerName="kube-scheduler"
	Dec 17 00:34:31 functional-698418 kubelet[6659]: E1217 00:34:31.951959    6659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-698418_kube-system_390b595ba70cd6ac1adab7b4d760d832_2\" already exists"
	Dec 17 00:34:31 functional-698418 kubelet[6659]: E1217 00:34:31.952026    6659 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-698418_kube-system_390b595ba70cd6ac1adab7b4d760d832_2\" already exists" pod="kube-system/kube-scheduler-functional-698418"
	Dec 17 00:34:31 functional-698418 kubelet[6659]: E1217 00:34:31.952042    6659 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-698418_kube-system_390b595ba70cd6ac1adab7b4d760d832_2\" already exists" pod="kube-system/kube-scheduler-functional-698418"
	Dec 17 00:34:31 functional-698418 kubelet[6659]: E1217 00:34:31.952089    6659 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-scheduler-functional-698418_kube-system(390b595ba70cd6ac1adab7b4d760d832)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-scheduler-functional-698418_kube-system(390b595ba70cd6ac1adab7b4d760d832)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-scheduler-functional-698418_kube-system_390b595ba70cd6ac1adab7b4d760d832_2\\\" already exists\"" pod="kube-system/kube-scheduler-functional-698418" podUID="390b595ba70cd6ac1adab7b4d760d832"
	Dec 17 00:34:35 functional-698418 kubelet[6659]: E1217 00:34:35.210887    6659 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765931675210637289  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:164169}  inodes_used:{value:73}}"
	Dec 17 00:34:35 functional-698418 kubelet[6659]: E1217 00:34:35.210907    6659 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765931675210637289  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:164169}  inodes_used:{value:73}}"
	Dec 17 00:34:45 functional-698418 kubelet[6659]: E1217 00:34:45.213950    6659 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765931685213190853  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:164169}  inodes_used:{value:73}}"
	Dec 17 00:34:45 functional-698418 kubelet[6659]: E1217 00:34:45.213972    6659 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765931685213190853  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:164169}  inodes_used:{value:73}}"
	Dec 17 00:34:46 functional-698418 kubelet[6659]: E1217 00:34:46.939169    6659 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-698418" containerName="kube-scheduler"
	Dec 17 00:34:46 functional-698418 kubelet[6659]: E1217 00:34:46.949453    6659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-698418_kube-system_390b595ba70cd6ac1adab7b4d760d832_2\" already exists"
	Dec 17 00:34:46 functional-698418 kubelet[6659]: E1217 00:34:46.949587    6659 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-698418_kube-system_390b595ba70cd6ac1adab7b4d760d832_2\" already exists" pod="kube-system/kube-scheduler-functional-698418"
	Dec 17 00:34:46 functional-698418 kubelet[6659]: E1217 00:34:46.949605    6659 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-698418_kube-system_390b595ba70cd6ac1adab7b4d760d832_2\" already exists" pod="kube-system/kube-scheduler-functional-698418"
	Dec 17 00:34:46 functional-698418 kubelet[6659]: E1217 00:34:46.949658    6659 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-scheduler-functional-698418_kube-system(390b595ba70cd6ac1adab7b4d760d832)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-scheduler-functional-698418_kube-system(390b595ba70cd6ac1adab7b4d760d832)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-scheduler-functional-698418_kube-system_390b595ba70cd6ac1adab7b4d760d832_2\\\" already exists\"" pod="kube-system/kube-scheduler-functional-698418" podUID="390b595ba70cd6ac1adab7b4d760d832"
	Dec 17 00:34:55 functional-698418 kubelet[6659]: E1217 00:34:55.216332    6659 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765931695216043431  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:164169}  inodes_used:{value:73}}"
	Dec 17 00:34:55 functional-698418 kubelet[6659]: E1217 00:34:55.216689    6659 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765931695216043431  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:164169}  inodes_used:{value:73}}"
	
	
	==> storage-provisioner [25dad2630a2cc7b34ac11027a3984e04a6f04faeb4aea873980cbc58aa6ad446] <==
	I1217 00:22:51.076154       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1217 00:22:51.080587       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [eeabbda62f1da3d54be70bfe16b34f07c8e8a53a40811a81f10481ac9ce44216] <==
	W1217 00:34:32.409429       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:34:34.412448       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:34:34.421030       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:34:36.424950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:34:36.430493       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:34:38.433694       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:34:38.438677       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:34:40.442262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:34:40.447799       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:34:42.451466       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:34:42.461121       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:34:44.465114       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:34:44.470004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:34:46.473872       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:34:46.478735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:34:48.481938       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:34:48.492598       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:34:50.496270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:34:50.501324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:34:52.504691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:34:52.513668       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:34:54.516481       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:34:54.521363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:34:56.525558       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:34:56.541970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
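The kubelet entries above repeat the same failure: CRI-O reports that a sandbox for kube-scheduler-functional-698418 already exists, so the kubelet cannot recreate the scheduler pod. A minimal troubleshooting sketch, assuming crictl is available inside the minikube VM (it normally is with the crio runtime); the sandbox ID below is a placeholder, not taken from this report:

    # List the sandboxes CRI-O still tracks for the scheduler pod
    out/minikube-linux-amd64 -p functional-698418 ssh "sudo crictl pods --name kube-scheduler-functional-698418"

    # Force-remove the stale sandbox so the kubelet can recreate it (ID is hypothetical)
    out/minikube-linux-amd64 -p functional-698418 ssh "sudo crictl rmp -f <stale-sandbox-id>"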
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-698418 -n functional-698418
helpers_test.go:270: (dbg) Run:  kubectl --context functional-698418 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: busybox-mount hello-node-5758569b79-z5vc8 hello-node-connect-9f67c86d4-n5xgg mysql-7d7b65bc95-m98rn sp-pod
helpers_test.go:283: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context functional-698418 describe pod busybox-mount hello-node-5758569b79-z5vc8 hello-node-connect-9f67c86d4-n5xgg mysql-7d7b65bc95-m98rn sp-pod
helpers_test.go:291: (dbg) kubectl --context functional-698418 describe pod busybox-mount hello-node-5758569b79-z5vc8 hello-node-connect-9f67c86d4-n5xgg mysql-7d7b65bc95-m98rn sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  mount-munger:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    Environment:  <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p7t2t (ro)
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-p7t2t:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             hello-node-5758569b79-z5vc8
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=hello-node
	                  pod-template-hash=5758569b79
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-node-5758569b79
	Containers:
	  echo-server:
	    Image:        kicbase/echo-server
	    Port:         <none>
	    Host Port:    <none>
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rkgzf (ro)
	Volumes:
	  kube-api-access-rkgzf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             hello-node-connect-9f67c86d4-n5xgg
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=hello-node-connect
	                  pod-template-hash=9f67c86d4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-node-connect-9f67c86d4
	Containers:
	  echo-server:
	    Image:        kicbase/echo-server
	    Port:         <none>
	    Host Port:    <none>
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pj79f (ro)
	Volumes:
	  kube-api-access-pj79f:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             mysql-7d7b65bc95-m98rn
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=mysql
	                  pod-template-hash=7d7b65bc95
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/mysql-7d7b65bc95
	Containers:
	  mysql:
	    Image:      public.ecr.aws/docker/library/mysql:8.4
	    Port:       3306/TCP (mysql)
	    Host Port:  0/TCP (mysql)
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4wznv (ro)
	Volumes:
	  kube-api-access-4wznv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  myfrontend:
	    Image:        public.ecr.aws/nginx/nginx:alpine
	    Port:         <none>
	    Host Port:    <none>
	    Environment:  <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wsvrf (ro)
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-wsvrf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
helpers_test.go:294: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (369.75s)
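The sp-pod described above mounts PersistentVolumeClaim myclaim, so the test cannot progress until that claim is bound by the storage-provisioner addon. A minimal sketch of how the same state could be checked by hand against this cluster (the wait timeout is an arbitrary choice, not taken from the test):

    # Check whether the claim the pod references was ever bound
    kubectl --context functional-698418 get pvc myclaim
    kubectl --context functional-698418 wait --for=jsonpath='{.status.phase}'=Bound pvc/myclaim --timeout=120s

    # Look at the provisioner that should bind it (its restart is visible in the logs above)
    kubectl --context functional-698418 -n kube-system logs storage-provisioner --tail=20
    kubectl --context functional-698418 get events -n default --field-selector involvedObject.name=myclaim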

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (602.61s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-698418 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-7d7b65bc95-m98rn" [b44a2b8c-adee-46f1-98e5-e5e43a9c78b0] Pending
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: WARNING: pod list for "default" "app=mysql" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test.go:1804: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1804: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-698418 -n functional-698418
functional_test.go:1804: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: showing logs for failed pods as of 2025-12-17 00:38:45.703365271 +0000 UTC m=+1952.406292392
functional_test.go:1804: (dbg) Run:  kubectl --context functional-698418 describe po mysql-7d7b65bc95-m98rn -n default
functional_test.go:1804: (dbg) kubectl --context functional-698418 describe po mysql-7d7b65bc95-m98rn -n default:
Name:             mysql-7d7b65bc95-m98rn
Namespace:        default
Priority:         0
Service Account:  default
Node:             <none>
Labels:           app=mysql
                  pod-template-hash=7d7b65bc95
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    ReplicaSet/mysql-7d7b65bc95
Containers:
  mysql:
    Image:      public.ecr.aws/docker/library/mysql:8.4
    Port:       3306/TCP (mysql)
    Host Port:  0/TCP (mysql)
    Limits:
      cpu:     700m
      memory:  700Mi
    Requests:
      cpu:     600m
      memory:  512Mi
    Environment:
      MYSQL_ROOT_PASSWORD:  password
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4wznv (ro)
Volumes:
  kube-api-access-4wznv:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>
functional_test.go:1804: (dbg) Run:  kubectl --context functional-698418 logs mysql-7d7b65bc95-m98rn -n default
functional_test.go:1804: (dbg) kubectl --context functional-698418 logs mysql-7d7b65bc95-m98rn -n default:
functional_test.go:1806: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
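Because the pod reports Status Pending with Node <none> and Events <none>, it was never scheduled rather than failing at image pull or startup. A short sketch for narrowing that down on this cluster, using only names already present in the log above:

    # Any scheduling decision or FailedScheduling event for the pod?
    kubectl --context functional-698418 get events -n default --field-selector involvedObject.name=mysql-7d7b65bc95-m98rn

    # Does the single node have room for the 600m CPU / 512Mi requests, and is it schedulable?
    kubectl --context functional-698418 describe node functional-698418 | grep -A8 "Allocated resources"

    # Is the scheduler itself healthy? (its sandbox-creation errors appear in the kubelet log above)
    kubectl --context functional-698418 get pods -n kube-system -l component=kube-scheduler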
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-698418 -n functional-698418
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-698418 logs -n 25: (1.397564383s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh       │ functional-698418 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:28 UTC │                     │
	│ ssh       │ functional-698418 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:28 UTC │ 17 Dec 25 00:28 UTC │
	│ ssh       │ functional-698418 ssh -- ls -la /mount-9p                                                                                                           │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:28 UTC │ 17 Dec 25 00:28 UTC │
	│ ssh       │ functional-698418 ssh cat /mount-9p/test-1765931331407799177                                                                                        │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:28 UTC │ 17 Dec 25 00:28 UTC │
	│ ssh       │ functional-698418 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │                     │
	│ ssh       │ functional-698418 ssh sudo umount -f /mount-9p                                                                                                      │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │ 17 Dec 25 00:32 UTC │
	│ ssh       │ functional-698418 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │                     │
	│ mount     │ -p functional-698418 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2996149672/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │                     │
	│ ssh       │ functional-698418 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │ 17 Dec 25 00:32 UTC │
	│ ssh       │ functional-698418 ssh -- ls -la /mount-9p                                                                                                           │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │ 17 Dec 25 00:32 UTC │
	│ ssh       │ functional-698418 ssh sudo umount -f /mount-9p                                                                                                      │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │                     │
	│ mount     │ -p functional-698418 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2837782444/001:/mount1 --alsologtostderr -v=1                │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │                     │
	│ mount     │ -p functional-698418 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2837782444/001:/mount3 --alsologtostderr -v=1                │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │                     │
	│ ssh       │ functional-698418 ssh findmnt -T /mount1                                                                                                            │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │                     │
	│ mount     │ -p functional-698418 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2837782444/001:/mount2 --alsologtostderr -v=1                │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │                     │
	│ ssh       │ functional-698418 ssh findmnt -T /mount1                                                                                                            │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │ 17 Dec 25 00:32 UTC │
	│ ssh       │ functional-698418 ssh findmnt -T /mount2                                                                                                            │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │ 17 Dec 25 00:32 UTC │
	│ ssh       │ functional-698418 ssh findmnt -T /mount3                                                                                                            │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │ 17 Dec 25 00:32 UTC │
	│ mount     │ -p functional-698418 --kill=true                                                                                                                    │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │                     │
	│ addons    │ functional-698418 addons list                                                                                                                       │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │ 17 Dec 25 00:32 UTC │
	│ addons    │ functional-698418 addons list -o json                                                                                                               │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │ 17 Dec 25 00:32 UTC │
	│ start     │ -p functional-698418 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0         │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │                     │
	│ start     │ -p functional-698418 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                   │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │                     │
	│ start     │ -p functional-698418 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0         │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:34 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-698418 --alsologtostderr -v=1                                                                                      │ functional-698418 │ jenkins │ v1.37.0 │ 17 Dec 25 00:34 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:34:58
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:34:58.017344   29429 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:34:58.017575   29429 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:34:58.017583   29429 out.go:374] Setting ErrFile to fd 2...
	I1217 00:34:58.017587   29429 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:34:58.017835   29429 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12839/.minikube/bin
	I1217 00:34:58.018256   29429 out.go:368] Setting JSON to false
	I1217 00:34:58.019096   29429 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4644,"bootTime":1765927054,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 00:34:58.019154   29429 start.go:143] virtualization: kvm guest
	I1217 00:34:58.021244   29429 out.go:179] * [functional-698418] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 00:34:58.022849   29429 notify.go:221] Checking for updates...
	I1217 00:34:58.022887   29429 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:34:58.024449   29429 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:34:58.025969   29429 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-12839/kubeconfig
	I1217 00:34:58.027336   29429 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12839/.minikube
	I1217 00:34:58.028756   29429 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 00:34:58.030134   29429 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:34:58.031812   29429 config.go:182] Loaded profile config "functional-698418": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:34:58.032278   29429 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:34:58.062597   29429 out.go:179] * Using the kvm2 driver based on the existing profile
	I1217 00:34:58.063991   29429 start.go:309] selected driver: kvm2
	I1217 00:34:58.064002   29429 start.go:927] validating driver "kvm2" against &{Name:functional-698418 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-698418 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:34:58.064121   29429 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:34:58.066098   29429 out.go:203] 
	W1217 00:34:58.067330   29429 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1217 00:34:58.068510   29429 out.go:203] 
	
	
	==> CRI-O <==
	Dec 17 00:38:46 functional-698418 crio[6296]: time="2025-12-17 00:38:46.541946495Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a16202e5-d70b-498b-a468-71f1379c11c8 name=/runtime.v1.RuntimeService/Version
	Dec 17 00:38:46 functional-698418 crio[6296]: time="2025-12-17 00:38:46.544135598Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8d4d1ac0-6264-4e0a-94f5-7408ddc506f7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 00:38:46 functional-698418 crio[6296]: time="2025-12-17 00:38:46.545217024Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765931926545191276,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:164169,},InodesUsed:&UInt64Value{Value:73,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8d4d1ac0-6264-4e0a-94f5-7408ddc506f7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 00:38:46 functional-698418 crio[6296]: time="2025-12-17 00:38:46.546342090Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fafca138-a4e3-4866-8cc4-91c02ab39c75 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 00:38:46 functional-698418 crio[6296]: time="2025-12-17 00:38:46.546429834Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fafca138-a4e3-4866-8cc4-91c02ab39c75 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 00:38:46 functional-698418 crio[6296]: time="2025-12-17 00:38:46.546863173Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b5f5abaf95cb222939db1aa90e240c94409e5d233b67a38091d26213a9ba10de,PodSandboxId:45138ffa0bb91dd23f83c681b3b9a410d900a685f09db3e936c18d79bcc7ff70,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765931069848478198,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-p2tnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375f206a-98e8-4a86-b794-274b2ac5d46d,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b30d8f3ed892ad639fe78bbed50c837c0bbb626952234350ebf7fa8426e1db2,PodSandboxId:1fbefd3a7421f756224811c63d74b6b380f3f61c9739cb3784937ab0f86c4362,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1765931069582611906,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmz66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 380aa506-9f03-4398-8d13-ac938ed6953c,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeabbda62f1da3d54be70bfe16b34f07c8e8a53a40811a81f10481ac9ce44216,PodSandboxId:a09572a7b0786aac784d1d4cef2dc3002c196faae35dcd055e33dd7b23a301bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765931069585947824,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aeec296-0f7d-489d-88c0-1a8f24bcdb27,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62c820b9f36a95750703dfbcacf0065cf51309e962dd5c6e6ba051e32abbd972,PodSandboxId:cb65bcef9b7f3551813702c3d2e3535cec74c3f21ef85ef8660fc3b5ed3ee837,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:1765931065819368164,Labels:map[string]string{io.kubernetes.container.name:
kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce7e9b17a3dd76e454bf214ca11d85f,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7787544c3b26bf83a9e4d701ebadf14ddc0d6c98ed581e4c9a78f8a3d5809692,PodSandboxId:5a541b1e17042c254bf59885fe7f01bb4f3cb8682cf724a0d3dc11c806811a96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d
6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1765931065782123739,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a9045876df478aae3a7b636723bc540,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6305bb233aef9c18d4ca9912e6cec1535112b9389a53466a2ae441f3d1c8b630,PodSandboxId:47788f5b505e2ab0b18e49d43a9d48485f0467d50546073946f22bc9d65190f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765931065773166649,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a712d99056792744476561e1a0361d20,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe278b7670e033003fe8649b2824cf6b6516ce6b4b51066c8b25075f6c74ecd1,PodSandboxId:167bfbac01f7e4c703b6085e2349b7dc06d514f766c93cf3cfe7e69d8a32a706,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa
5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765930971276417771,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-p2tnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375f206a-98e8-4a86-b794-274b2ac5d46d,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25dad2630a2cc7b34ac11027a3984e04a6f04faeb4aea873980cbc58aa6ad446,PodSandboxId:bca5c63a70f5569e938857e5287ac12f75220866ec46e3511d6af97931577564,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765930970784659776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aeec296-0f7d-489d-88c0-1a8f24bcdb27,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6da27c7e1968f00886d58afd572464f1215b610d6f810eae1693c39394b159ab,PodSandboxId:6e560eef5590b64b6701674c0b23f8ba8f5ae436c11a5b33c8b8479b163a789f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_EXITED,CreatedAt:1765930970517125898,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmz66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 380aa506-9f03-4398-8d13-ac938ed6953c,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessageP
olicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ccf8afdca857a06f5fafcae2d2203299adc9758a4deda7044a39b8fa9c78cf0,PodSandboxId:5b1069943f833c3bb09b24a2a7a536da2fcef186396d925dddd710e4b8fa9150,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765930894126083795,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a712d99056792744476561e1a0361d20,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:089ad298c6676a699fe577e2d5d71b0c488b5a9d3e33173bdc20901711843fd1,PodSandboxId:d4cac43b8d39630edc4155c4315349ac5c82c0155fc2675a8a0df421a0b3e285,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765930894091115325,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a9045876df478aae3a7b636723bc540,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"host
Port\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95a7023d7b96443435118fb950ed0496e6a19ddc94f55a8e252d4ea702aa939b,PodSandboxId:affe536f1f44e4a5eec045150c989f9cf8613992cd1beaf52e0734244ce8c1bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765930894095856281,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 390b595ba70cd6ac1adab7b4d760d832,},Annotations:map[string]
string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fafca138-a4e3-4866-8cc4-91c02ab39c75 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 00:38:46 functional-698418 crio[6296]: time="2025-12-17 00:38:46.584446672Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=55f02435-77b2-41c3-a2cc-cda610fa2dcb name=/runtime.v1.RuntimeService/Version
	Dec 17 00:38:46 functional-698418 crio[6296]: time="2025-12-17 00:38:46.584803585Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=55f02435-77b2-41c3-a2cc-cda610fa2dcb name=/runtime.v1.RuntimeService/Version
	Dec 17 00:38:46 functional-698418 crio[6296]: time="2025-12-17 00:38:46.586786153Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=83c11791-493f-4560-9a8c-4d8bd14255ac name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 00:38:46 functional-698418 crio[6296]: time="2025-12-17 00:38:46.588456201Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765931926588421149,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:164169,},InodesUsed:&UInt64Value{Value:73,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=83c11791-493f-4560-9a8c-4d8bd14255ac name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 00:38:46 functional-698418 crio[6296]: time="2025-12-17 00:38:46.590160011Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ae9eca87-c44f-41c1-b3c3-a86bd4b0d025 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 00:38:46 functional-698418 crio[6296]: time="2025-12-17 00:38:46.590376656Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ae9eca87-c44f-41c1-b3c3-a86bd4b0d025 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 00:38:46 functional-698418 crio[6296]: time="2025-12-17 00:38:46.590826187Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b5f5abaf95cb222939db1aa90e240c94409e5d233b67a38091d26213a9ba10de,PodSandboxId:45138ffa0bb91dd23f83c681b3b9a410d900a685f09db3e936c18d79bcc7ff70,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765931069848478198,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-p2tnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375f206a-98e8-4a86-b794-274b2ac5d46d,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b30d8f3ed892ad639fe78bbed50c837c0bbb626952234350ebf7fa8426e1db2,PodSandboxId:1fbefd3a7421f756224811c63d74b6b380f3f61c9739cb3784937ab0f86c4362,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1765931069582611906,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmz66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 380aa506-9f03-4398-8d13-ac938ed6953c,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeabbda62f1da3d54be70bfe16b34f07c8e8a53a40811a81f10481ac9ce44216,PodSandboxId:a09572a7b0786aac784d1d4cef2dc3002c196faae35dcd055e33dd7b23a301bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765931069585947824,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aeec296-0f7d-489d-88c0-1a8f24bcdb27,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62c820b9f36a95750703dfbcacf0065cf51309e962dd5c6e6ba051e32abbd972,PodSandboxId:cb65bcef9b7f3551813702c3d2e3535cec74c3f21ef85ef8660fc3b5ed3ee837,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:1765931065819368164,Labels:map[string]string{io.kubernetes.container.name:
kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce7e9b17a3dd76e454bf214ca11d85f,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7787544c3b26bf83a9e4d701ebadf14ddc0d6c98ed581e4c9a78f8a3d5809692,PodSandboxId:5a541b1e17042c254bf59885fe7f01bb4f3cb8682cf724a0d3dc11c806811a96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d
6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1765931065782123739,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a9045876df478aae3a7b636723bc540,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6305bb233aef9c18d4ca9912e6cec1535112b9389a53466a2ae441f3d1c8b630,PodSandboxId:47788f5b505e2ab0b18e49d43a9d48485f0467d50546073946f22bc9d65190f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765931065773166649,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a712d99056792744476561e1a0361d20,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe278b7670e033003fe8649b2824cf6b6516ce6b4b51066c8b25075f6c74ecd1,PodSandboxId:167bfbac01f7e4c703b6085e2349b7dc06d514f766c93cf3cfe7e69d8a32a706,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa
5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765930971276417771,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-p2tnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375f206a-98e8-4a86-b794-274b2ac5d46d,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25dad2630a2cc7b34ac11027a3984e04a6f04faeb4aea873980cbc58aa6ad446,PodSandboxId:bca5c63a70f5569e938857e5287ac12f75220866ec46e3511d6af97931577564,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765930970784659776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aeec296-0f7d-489d-88c0-1a8f24bcdb27,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6da27c7e1968f00886d58afd572464f1215b610d6f810eae1693c39394b159ab,PodSandboxId:6e560eef5590b64b6701674c0b23f8ba8f5ae436c11a5b33c8b8479b163a789f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_EXITED,CreatedAt:1765930970517125898,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmz66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 380aa506-9f03-4398-8d13-ac938ed6953c,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessageP
olicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ccf8afdca857a06f5fafcae2d2203299adc9758a4deda7044a39b8fa9c78cf0,PodSandboxId:5b1069943f833c3bb09b24a2a7a536da2fcef186396d925dddd710e4b8fa9150,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765930894126083795,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a712d99056792744476561e1a0361d20,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:089ad298c6676a699fe577e2d5d71b0c488b5a9d3e33173bdc20901711843fd1,PodSandboxId:d4cac43b8d39630edc4155c4315349ac5c82c0155fc2675a8a0df421a0b3e285,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765930894091115325,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a9045876df478aae3a7b636723bc540,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"host
Port\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95a7023d7b96443435118fb950ed0496e6a19ddc94f55a8e252d4ea702aa939b,PodSandboxId:affe536f1f44e4a5eec045150c989f9cf8613992cd1beaf52e0734244ce8c1bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765930894095856281,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 390b595ba70cd6ac1adab7b4d760d832,},Annotations:map[string]
string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ae9eca87-c44f-41c1-b3c3-a86bd4b0d025 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 00:38:46 functional-698418 crio[6296]: time="2025-12-17 00:38:46.615406689Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=93beae2f-65b9-4a6f-ad44-7939d39b2d7b name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 17 00:38:46 functional-698418 crio[6296]: time="2025-12-17 00:38:46.615741545Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:45138ffa0bb91dd23f83c681b3b9a410d900a685f09db3e936c18d79bcc7ff70,Metadata:&PodSandboxMetadata{Name:coredns-7d764666f9-p2tnv,Uid:375f206a-98e8-4a86-b794-274b2ac5d46d,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1765931069417254183,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7d764666f9-p2tnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375f206a-98e8-4a86-b794-274b2ac5d46d,k8s-app: kube-dns,pod-template-hash: 7d764666f9,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-17T00:24:28.898956791Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a09572a7b0786aac784d1d4cef2dc3002c196faae35dcd055e33dd7b23a301bf,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:8aeec296-0f7d-489d-88c0-1a8f24bcdb27,Namespace:kube-system
,Attempt:3,},State:SANDBOX_READY,CreatedAt:1765931069256290564,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aeec296-0f7d-489d-88c0-1a8f24bcdb27,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\
":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-12-17T00:24:28.898955389Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1fbefd3a7421f756224811c63d74b6b380f3f61c9739cb3784937ab0f86c4362,Metadata:&PodSandboxMetadata{Name:kube-proxy-qmz66,Uid:380aa506-9f03-4398-8d13-ac938ed6953c,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1765931069240460749,Labels:map[string]string{controller-revision-hash: 7bd5454df7,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-qmz66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 380aa506-9f03-4398-8d13-ac938ed6953c,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-17T00:24:28.898961527Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5a541b1e17042c254bf59885fe7f01bb4f3cb8682cf724a0d3dc11c806811a96,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-functional-698418,Uid:2
a9045876df478aae3a7b636723bc540,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1765931065589722246,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a9045876df478aae3a7b636723bc540,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2a9045876df478aae3a7b636723bc540,kubernetes.io/config.seen: 2025-12-17T00:24:24.906954007Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:cb65bcef9b7f3551813702c3d2e3535cec74c3f21ef85ef8660fc3b5ed3ee837,Metadata:&PodSandboxMetadata{Name:kube-apiserver-functional-698418,Uid:1ce7e9b17a3dd76e454bf214ca11d85f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765931065589245770,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-functional-698418,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 1ce7e9b17a3dd76e454bf214ca11d85f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.109:8441,kubernetes.io/config.hash: 1ce7e9b17a3dd76e454bf214ca11d85f,kubernetes.io/config.seen: 2025-12-17T00:24:24.906952914Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:47788f5b505e2ab0b18e49d43a9d48485f0467d50546073946f22bc9d65190f8,Metadata:&PodSandboxMetadata{Name:etcd-functional-698418,Uid:a712d99056792744476561e1a0361d20,Namespace:kube-system,Attempt:3,},State:SANDBOX_READY,CreatedAt:1765931065588791216,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a712d99056792744476561e1a0361d20,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.109:2379,kubernetes.io/config.hash: a
712d99056792744476561e1a0361d20,kubernetes.io/config.seen: 2025-12-17T00:24:24.906945395Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:167bfbac01f7e4c703b6085e2349b7dc06d514f766c93cf3cfe7e69d8a32a706,Metadata:&PodSandboxMetadata{Name:coredns-7d764666f9-p2tnv,Uid:375f206a-98e8-4a86-b794-274b2ac5d46d,Namespace:kube-system,Attempt:2,},State:SANDBOX_NOTREADY,CreatedAt:1765930970318176395,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7d764666f9-p2tnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375f206a-98e8-4a86-b794-274b2ac5d46d,k8s-app: kube-dns,pod-template-hash: 7d764666f9,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-17T00:21:37.094813101Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bca5c63a70f5569e938857e5287ac12f75220866ec46e3511d6af97931577564,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:8aeec296-0f7d-489d-88c0-1a8f24bcdb27,Namespace:kube-system,Attempt:2,},Stat
e:SANDBOX_NOTREADY,CreatedAt:1765930970314314045,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aeec296-0f7d-489d-88c0-1a8f24bcdb27,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"
/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-12-17T00:21:37.094811769Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6e560eef5590b64b6701674c0b23f8ba8f5ae436c11a5b33c8b8479b163a789f,Metadata:&PodSandboxMetadata{Name:kube-proxy-qmz66,Uid:380aa506-9f03-4398-8d13-ac938ed6953c,Namespace:kube-system,Attempt:2,},State:SANDBOX_NOTREADY,CreatedAt:1765930970020012264,Labels:map[string]string{controller-revision-hash: 7bd5454df7,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-qmz66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 380aa506-9f03-4398-8d13-ac938ed6953c,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-17T00:21:37.094809600Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d4cac43b8d39630edc4155c4315349ac5c82c0155fc2675a8a0df421a0b3e285,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-functional-698418,Uid:2a9045876df4
78aae3a7b636723bc540,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1765930768451249660,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a9045876df478aae3a7b636723bc540,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2a9045876df478aae3a7b636723bc540,kubernetes.io/config.seen: 2025-12-17T00:18:37.825797754Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:affe536f1f44e4a5eec045150c989f9cf8613992cd1beaf52e0734244ce8c1bd,Metadata:&PodSandboxMetadata{Name:kube-scheduler-functional-698418,Uid:390b595ba70cd6ac1adab7b4d760d832,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1765930768338346727,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-functional-698418,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 390b595ba70cd6ac1adab7b4d760d832,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 390b595ba70cd6ac1adab7b4d760d832,kubernetes.io/config.seen: 2025-12-17T00:18:37.825801625Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5b1069943f833c3bb09b24a2a7a536da2fcef186396d925dddd710e4b8fa9150,Metadata:&PodSandboxMetadata{Name:etcd-functional-698418,Uid:a712d99056792744476561e1a0361d20,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1765930768238266949,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a712d99056792744476561e1a0361d20,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.109:2379,kubernetes.io/config.hash: a712d99056792744476561e1a0361d20,kubernetes.io/config.seen: 2025-12-17T00:18:37.82580282
0Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=93beae2f-65b9-4a6f-ad44-7939d39b2d7b name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 17 00:38:46 functional-698418 crio[6296]: time="2025-12-17 00:38:46.617932754Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1ef9c792-cd50-4ba7-9660-d8d0770b90d6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 00:38:46 functional-698418 crio[6296]: time="2025-12-17 00:38:46.618006818Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1ef9c792-cd50-4ba7-9660-d8d0770b90d6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 00:38:46 functional-698418 crio[6296]: time="2025-12-17 00:38:46.618223693Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b5f5abaf95cb222939db1aa90e240c94409e5d233b67a38091d26213a9ba10de,PodSandboxId:45138ffa0bb91dd23f83c681b3b9a410d900a685f09db3e936c18d79bcc7ff70,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765931069848478198,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-p2tnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375f206a-98e8-4a86-b794-274b2ac5d46d,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b30d8f3ed892ad639fe78bbed50c837c0bbb626952234350ebf7fa8426e1db2,PodSandboxId:1fbefd3a7421f756224811c63d74b6b380f3f61c9739cb3784937ab0f86c4362,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1765931069582611906,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmz66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 380aa506-9f03-4398-8d13-ac938ed6953c,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeabbda62f1da3d54be70bfe16b34f07c8e8a53a40811a81f10481ac9ce44216,PodSandboxId:a09572a7b0786aac784d1d4cef2dc3002c196faae35dcd055e33dd7b23a301bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765931069585947824,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aeec296-0f7d-489d-88c0-1a8f24bcdb27,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62c820b9f36a95750703dfbcacf0065cf51309e962dd5c6e6ba051e32abbd972,PodSandboxId:cb65bcef9b7f3551813702c3d2e3535cec74c3f21ef85ef8660fc3b5ed3ee837,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:1765931065819368164,Labels:map[string]string{io.kubernetes.container.name:
kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce7e9b17a3dd76e454bf214ca11d85f,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7787544c3b26bf83a9e4d701ebadf14ddc0d6c98ed581e4c9a78f8a3d5809692,PodSandboxId:5a541b1e17042c254bf59885fe7f01bb4f3cb8682cf724a0d3dc11c806811a96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d
6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1765931065782123739,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a9045876df478aae3a7b636723bc540,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6305bb233aef9c18d4ca9912e6cec1535112b9389a53466a2ae441f3d1c8b630,PodSandboxId:47788f5b505e2ab0b18e49d43a9d48485f0467d50546073946f22bc9d65190f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765931065773166649,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a712d99056792744476561e1a0361d20,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe278b7670e033003fe8649b2824cf6b6516ce6b4b51066c8b25075f6c74ecd1,PodSandboxId:167bfbac01f7e4c703b6085e2349b7dc06d514f766c93cf3cfe7e69d8a32a706,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa
5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765930971276417771,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-p2tnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375f206a-98e8-4a86-b794-274b2ac5d46d,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25dad2630a2cc7b34ac11027a3984e04a6f04faeb4aea873980cbc58aa6ad446,PodSandboxId:bca5c63a70f5569e938857e5287ac12f75220866ec46e3511d6af97931577564,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765930970784659776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aeec296-0f7d-489d-88c0-1a8f24bcdb27,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6da27c7e1968f00886d58afd572464f1215b610d6f810eae1693c39394b159ab,PodSandboxId:6e560eef5590b64b6701674c0b23f8ba8f5ae436c11a5b33c8b8479b163a789f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_EXITED,CreatedAt:1765930970517125898,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmz66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 380aa506-9f03-4398-8d13-ac938ed6953c,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessageP
olicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ccf8afdca857a06f5fafcae2d2203299adc9758a4deda7044a39b8fa9c78cf0,PodSandboxId:5b1069943f833c3bb09b24a2a7a536da2fcef186396d925dddd710e4b8fa9150,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765930894126083795,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a712d99056792744476561e1a0361d20,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:089ad298c6676a699fe577e2d5d71b0c488b5a9d3e33173bdc20901711843fd1,PodSandboxId:d4cac43b8d39630edc4155c4315349ac5c82c0155fc2675a8a0df421a0b3e285,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765930894091115325,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a9045876df478aae3a7b636723bc540,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"host
Port\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95a7023d7b96443435118fb950ed0496e6a19ddc94f55a8e252d4ea702aa939b,PodSandboxId:affe536f1f44e4a5eec045150c989f9cf8613992cd1beaf52e0734244ce8c1bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765930894095856281,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 390b595ba70cd6ac1adab7b4d760d832,},Annotations:map[string]
string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1ef9c792-cd50-4ba7-9660-d8d0770b90d6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 00:38:46 functional-698418 crio[6296]: time="2025-12-17 00:38:46.633614206Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fcd91dea-2dc7-4e06-bfa6-ad7b5b85b750 name=/runtime.v1.RuntimeService/Version
	Dec 17 00:38:46 functional-698418 crio[6296]: time="2025-12-17 00:38:46.633727428Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fcd91dea-2dc7-4e06-bfa6-ad7b5b85b750 name=/runtime.v1.RuntimeService/Version
	Dec 17 00:38:46 functional-698418 crio[6296]: time="2025-12-17 00:38:46.636338664Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0fed95a7-686d-4169-8939-1897f2804a13 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 00:38:46 functional-698418 crio[6296]: time="2025-12-17 00:38:46.636985957Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765931926636959969,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:164169,},InodesUsed:&UInt64Value{Value:73,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0fed95a7-686d-4169-8939-1897f2804a13 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 00:38:46 functional-698418 crio[6296]: time="2025-12-17 00:38:46.637971873Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=de7da9cb-0588-4243-b43a-56e9809ad8c9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 00:38:46 functional-698418 crio[6296]: time="2025-12-17 00:38:46.638027702Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=de7da9cb-0588-4243-b43a-56e9809ad8c9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 00:38:46 functional-698418 crio[6296]: time="2025-12-17 00:38:46.638261903Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b5f5abaf95cb222939db1aa90e240c94409e5d233b67a38091d26213a9ba10de,PodSandboxId:45138ffa0bb91dd23f83c681b3b9a410d900a685f09db3e936c18d79bcc7ff70,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_RUNNING,CreatedAt:1765931069848478198,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-p2tnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375f206a-98e8-4a86-b794-274b2ac5d46d,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b30d8f3ed892ad639fe78bbed50c837c0bbb626952234350ebf7fa8426e1db2,PodSandboxId:1fbefd3a7421f756224811c63d74b6b380f3f61c9739cb3784937ab0f86c4362,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1765931069582611906,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmz66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 380aa506-9f03-4398-8d13-ac938ed6953c,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeabbda62f1da3d54be70bfe16b34f07c8e8a53a40811a81f10481ac9ce44216,PodSandboxId:a09572a7b0786aac784d1d4cef2dc3002c196faae35dcd055e33dd7b23a301bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765931069585947824,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aeec296-0f7d-489d-88c0-1a8f24bcdb27,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62c820b9f36a95750703dfbcacf0065cf51309e962dd5c6e6ba051e32abbd972,PodSandboxId:cb65bcef9b7f3551813702c3d2e3535cec74c3f21ef85ef8660fc3b5ed3ee837,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:1765931065819368164,Labels:map[string]string{io.kubernetes.container.name:
kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce7e9b17a3dd76e454bf214ca11d85f,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7787544c3b26bf83a9e4d701ebadf14ddc0d6c98ed581e4c9a78f8a3d5809692,PodSandboxId:5a541b1e17042c254bf59885fe7f01bb4f3cb8682cf724a0d3dc11c806811a96,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d
6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1765931065782123739,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a9045876df478aae3a7b636723bc540,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6305bb233aef9c18d4ca9912e6cec1535112b9389a53466a2ae441f3d1c8b630,PodSandboxId:47788f5b505e2ab0b18e49d43a9d48485f0467d50546073946f22bc9d65190f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765931065773166649,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a712d99056792744476561e1a0361d20,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe278b7670e033003fe8649b2824cf6b6516ce6b4b51066c8b25075f6c74ecd1,PodSandboxId:167bfbac01f7e4c703b6085e2349b7dc06d514f766c93cf3cfe7e69d8a32a706,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa
5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765930971276417771,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-p2tnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 375f206a-98e8-4a86-b794-274b2ac5d46d,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25dad2630a2cc7b34ac11027a3984e04a6f04faeb4aea873980cbc58aa6ad446,PodSandboxId:bca5c63a70f5569e938857e5287ac12f75220866ec46e3511d6af97931577564,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765930970784659776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aeec296-0f7d-489d-88c0-1a8f24bcdb27,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubern
etes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6da27c7e1968f00886d58afd572464f1215b610d6f810eae1693c39394b159ab,PodSandboxId:6e560eef5590b64b6701674c0b23f8ba8f5ae436c11a5b33c8b8479b163a789f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_EXITED,CreatedAt:1765930970517125898,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qmz66,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 380aa506-9f03-4398-8d13-ac938ed6953c,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessageP
olicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ccf8afdca857a06f5fafcae2d2203299adc9758a4deda7044a39b8fa9c78cf0,PodSandboxId:5b1069943f833c3bb09b24a2a7a536da2fcef186396d925dddd710e4b8fa9150,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765930894126083795,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a712d99056792744476561e1a0361d20,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:089ad298c6676a699fe577e2d5d71b0c488b5a9d3e33173bdc20901711843fd1,PodSandboxId:d4cac43b8d39630edc4155c4315349ac5c82c0155fc2675a8a0df421a0b3e285,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765930894091115325,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a9045876df478aae3a7b636723bc540,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"host
Port\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95a7023d7b96443435118fb950ed0496e6a19ddc94f55a8e252d4ea702aa939b,PodSandboxId:affe536f1f44e4a5eec045150c989f9cf8613992cd1beaf52e0734244ce8c1bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765930894095856281,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-698418,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 390b595ba70cd6ac1adab7b4d760d832,},Annotations:map[string]
string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=de7da9cb-0588-4243-b43a-56e9809ad8c9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	b5f5abaf95cb2       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139   14 minutes ago      Running             coredns                   3                   45138ffa0bb91       coredns-7d764666f9-p2tnv                    kube-system
	eeabbda62f1da       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       3                   a09572a7b0786       storage-provisioner                         kube-system
	8b30d8f3ed892       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   14 minutes ago      Running             kube-proxy                3                   1fbefd3a7421f       kube-proxy-qmz66                            kube-system
	62c820b9f36a9       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b   14 minutes ago      Running             kube-apiserver            0                   cb65bcef9b7f3       kube-apiserver-functional-698418            kube-system
	7787544c3b26b       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   14 minutes ago      Running             kube-controller-manager   3                   5a541b1e17042       kube-controller-manager-functional-698418   kube-system
	6305bb233aef9       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   14 minutes ago      Running             etcd                      3                   47788f5b505e2       etcd-functional-698418                      kube-system
	fe278b7670e03       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139   15 minutes ago      Exited              coredns                   2                   167bfbac01f7e       coredns-7d764666f9-p2tnv                    kube-system
	25dad2630a2cc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Exited              storage-provisioner       2                   bca5c63a70f55       storage-provisioner                         kube-system
	6da27c7e1968f       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810   15 minutes ago      Exited              kube-proxy                2                   6e560eef5590b       kube-proxy-qmz66                            kube-system
	4ccf8afdca857       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   17 minutes ago      Exited              etcd                      2                   5b1069943f833       etcd-functional-698418                      kube-system
	95a7023d7b964       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46   17 minutes ago      Exited              kube-scheduler            2                   affe536f1f44e       kube-scheduler-functional-698418            kube-system
	089ad298c6676       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc   17 minutes ago      Exited              kube-controller-manager   2                   d4cac43b8d396       kube-controller-manager-functional-698418   kube-system
	
	
	==> coredns [b5f5abaf95cb222939db1aa90e240c94409e5d233b67a38091d26213a9ba10de] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:53680 - 57417 "HINFO IN 5216687169014558221.4342564943848837697. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.019890703s
	
	
	==> coredns [fe278b7670e033003fe8649b2824cf6b6516ce6b4b51066c8b25075f6c74ecd1] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:35643 - 58571 "HINFO IN 8723388857180390004.4112128438720857375. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.021681051s
	
	
	==> describe nodes <==
	Name:               functional-698418
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-698418
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1
	                    minikube.k8s.io/name=functional-698418
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T00_18_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 00:18:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-698418
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 00:38:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 00:34:09 +0000   Wed, 17 Dec 2025 00:18:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 00:34:09 +0000   Wed, 17 Dec 2025 00:18:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 00:34:09 +0000   Wed, 17 Dec 2025 00:18:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 00:34:09 +0000   Wed, 17 Dec 2025 00:18:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.109
	  Hostname:    functional-698418
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001784Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001784Ki
	  pods:               110
	System Info:
	  Machine ID:                 4dd443fff9c14f00b485986b75d25594
	  System UUID:                4dd443ff-f9c1-4f00-b485-986b75d25594
	  Boot ID:                    cfa996e4-9a58-45c9-b4e8-fda78786a8ea
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7d764666f9-p2tnv                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     20m
	  kube-system                 etcd-functional-698418                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         20m
	  kube-system                 kube-apiserver-functional-698418             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-functional-698418    200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-qmz66                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-functional-698418             100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  20m   node-controller  Node functional-698418 event: Registered Node functional-698418 in Controller
	  Normal  RegisteredNode  17m   node-controller  Node functional-698418 event: Registered Node functional-698418 in Controller
	  Normal  RegisteredNode  14m   node-controller  Node functional-698418 event: Registered Node functional-698418 in Controller
	
	
	==> dmesg <==
	[Dec17 00:18] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001752] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.001838] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.180772] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000015] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.087000] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.097371] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.150666] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.082933] kauditd_printk_skb: 18 callbacks suppressed
	[  +3.378145] kauditd_printk_skb: 296 callbacks suppressed
	[Dec17 00:19] kauditd_printk_skb: 350 callbacks suppressed
	[Dec17 00:21] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.190827] kauditd_printk_skb: 57 callbacks suppressed
	[Dec17 00:22] kauditd_printk_skb: 12 callbacks suppressed
	[Dec17 00:24] kauditd_printk_skb: 254 callbacks suppressed
	[  +4.295954] kauditd_printk_skb: 154 callbacks suppressed
	[Dec17 00:25] kauditd_printk_skb: 134 callbacks suppressed
	[Dec17 00:28] kauditd_printk_skb: 14 callbacks suppressed
	[  +4.344537] kauditd_printk_skb: 14 callbacks suppressed
	[Dec17 00:34] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [4ccf8afdca857a06f5fafcae2d2203299adc9758a4deda7044a39b8fa9c78cf0] <==
	{"level":"warn","ts":"2025-12-17T00:21:35.657581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:21:35.667886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:21:35.672758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:21:35.684396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:21:35.706567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:21:35.715915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:21:35.760731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51998","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-17T00:22:44.130881Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-17T00:22:44.130978Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-698418","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.109:2380"],"advertise-client-urls":["https://192.168.39.109:2379"]}
	{"level":"error","ts":"2025-12-17T00:22:44.131075Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-17T00:22:44.221581Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-17T00:22:44.221679Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-17T00:22:44.221712Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"22872ffef731375a","current-leader-member-id":"22872ffef731375a"}
	{"level":"info","ts":"2025-12-17T00:22:44.221792Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-17T00:22:44.221802Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-17T00:22:44.221922Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-17T00:22:44.222028Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-17T00:22:44.222046Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-17T00:22:44.222103Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.109:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-17T00:22:44.222119Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.109:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-17T00:22:44.222160Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.109:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-17T00:22:44.225852Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.109:2380"}
	{"level":"error","ts":"2025-12-17T00:22:44.225922Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.109:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-17T00:22:44.225945Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.109:2380"}
	{"level":"info","ts":"2025-12-17T00:22:44.225950Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-698418","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.109:2380"],"advertise-client-urls":["https://192.168.39.109:2379"]}
	
	
	==> etcd [6305bb233aef9c18d4ca9912e6cec1535112b9389a53466a2ae441f3d1c8b630] <==
	{"level":"warn","ts":"2025-12-17T00:24:27.313293Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.320616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.331004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.337374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.345255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.354619Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.365619Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.374882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.383058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.391645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.400215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.408059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.415237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.422074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.440217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.454486Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.462999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.477140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.484854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.493591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.501969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T00:24:27.546639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39884","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-17T00:34:26.939994Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1007}
	{"level":"info","ts":"2025-12-17T00:34:26.965323Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1007,"took":"23.382262ms","hash":791219146,"current-db-size-bytes":2891776,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":1105920,"current-db-size-in-use":"1.1 MB"}
	{"level":"info","ts":"2025-12-17T00:34:26.965399Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":791219146,"revision":1007,"compact-revision":-1}
	
	
	==> kernel <==
	 00:38:47 up 20 min,  0 users,  load average: 0.11, 0.11, 0.13
	Linux functional-698418 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Dec 16 03:41:16 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [62c820b9f36a95750703dfbcacf0065cf51309e962dd5c6e6ba051e32abbd972] <==
	I1217 00:24:28.281042       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1217 00:24:28.281300       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:28.281352       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1217 00:24:28.282140       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:28.282199       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:28.283545       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1217 00:24:28.292001       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1217 00:24:28.294125       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 00:24:28.967261       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 00:24:29.085463       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1217 00:24:30.326310       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 00:24:30.386614       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1217 00:24:30.419123       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 00:24:30.426405       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 00:24:31.694480       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 00:24:31.744240       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 00:28:41.476382       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.102.113.226"}
	I1217 00:28:45.404723       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.109.246.234"}
	I1217 00:28:45.460397       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 00:28:46.160211       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.101.111.117"}
	I1217 00:32:56.403257       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.97.89.136"}
	I1217 00:34:28.200355       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 00:34:58.933078       1 controller.go:667] quota admission added evaluator for: namespaces
	I1217 00:34:59.188964       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.88.77"}
	I1217 00:34:59.214738       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.7.189"}
	
	
	==> kube-controller-manager [089ad298c6676a699fe577e2d5d71b0c488b5a9d3e33173bdc20901711843fd1] <==
	I1217 00:21:39.058814       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.063583       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.063639       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.063668       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.064068       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.064411       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.064486       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.064629       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.064697       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.064728       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.064803       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.064959       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.065112       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.066039       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.067219       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 00:21:39.067679       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.068598       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.068694       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.068784       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.069012       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.073643       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.154923       1 shared_informer.go:377] "Caches are synced"
	I1217 00:21:39.154942       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1217 00:21:39.154946       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1217 00:21:39.168785       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-controller-manager [7787544c3b26bf83a9e4d701ebadf14ddc0d6c98ed581e4c9a78f8a3d5809692] <==
	I1217 00:24:31.474366       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.474425       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.475861       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1217 00:24:31.476235       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-698418"
	I1217 00:24:31.474433       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.472707       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.472720       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.472725       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.476353       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1217 00:24:31.474449       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.472698       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.474439       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.478289       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.474444       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.478338       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.541643       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:31.751306       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	E1217 00:34:59.037812       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1217 00:34:59.048787       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1217 00:34:59.057113       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1217 00:34:59.065963       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1217 00:34:59.074857       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1217 00:34:59.074954       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1217 00:34:59.083829       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1217 00:34:59.091457       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [6da27c7e1968f00886d58afd572464f1215b610d6f810eae1693c39394b159ab] <==
	I1217 00:22:50.913830       1 server_linux.go:53] "Using iptables proxy"
	I1217 00:22:51.014579       1 shared_informer.go:370] "Waiting for caches to sync"
	
	
	==> kube-proxy [8b30d8f3ed892ad639fe78bbed50c837c0bbb626952234350ebf7fa8426e1db2] <==
	I1217 00:24:29.960125       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 00:24:30.061466       1 shared_informer.go:377] "Caches are synced"
	I1217 00:24:30.061553       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.109"]
	E1217 00:24:30.061659       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 00:24:30.113793       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1217 00:24:30.114041       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1217 00:24:30.114249       1 server_linux.go:136] "Using iptables Proxier"
	I1217 00:24:30.163601       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 00:24:30.164393       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1217 00:24:30.164446       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 00:24:30.181144       1 config.go:200] "Starting service config controller"
	I1217 00:24:30.185060       1 config.go:106] "Starting endpoint slice config controller"
	I1217 00:24:30.185270       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 00:24:30.185303       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 00:24:30.185307       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 00:24:30.181175       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 00:24:30.198089       1 config.go:309] "Starting node config controller"
	I1217 00:24:30.198119       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 00:24:30.287379       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 00:24:30.289567       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 00:24:30.289579       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 00:24:30.298372       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [95a7023d7b96443435118fb950ed0496e6a19ddc94f55a8e252d4ea702aa939b] <==
	E1217 00:21:36.448305       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope"
	E1217 00:21:36.449061       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1217 00:21:36.449576       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1217 00:21:36.453581       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1217 00:21:36.448485       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1217 00:21:36.449996       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1217 00:21:36.450331       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1217 00:21:36.451038       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1217 00:21:36.451413       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1217 00:21:36.451819       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1217 00:21:36.453319       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1217 00:21:36.449614       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1217 00:21:36.453890       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1217 00:21:36.454295       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1217 00:21:36.459695       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1217 00:21:36.459774       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1217 00:21:36.459849       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1217 00:21:36.460344       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	I1217 00:21:39.389301       1 shared_informer.go:377] "Caches are synced"
	I1217 00:22:44.142866       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1217 00:22:44.142910       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1217 00:22:44.142920       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1217 00:22:44.142981       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 00:22:44.143070       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1217 00:22:44.143163       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 17 00:38:08 functional-698418 kubelet[6659]: E1217 00:38:08.950654    6659 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-scheduler-functional-698418_kube-system(390b595ba70cd6ac1adab7b4d760d832)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-scheduler-functional-698418_kube-system(390b595ba70cd6ac1adab7b4d760d832)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-scheduler-functional-698418_kube-system_390b595ba70cd6ac1adab7b4d760d832_2\\\" already exists\"" pod="kube-system/kube-scheduler-functional-698418" podUID="390b595ba70cd6ac1adab7b4d760d832"
	Dec 17 00:38:15 functional-698418 kubelet[6659]: E1217 00:38:15.268581    6659 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765931895268244795  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:164169}  inodes_used:{value:73}}"
	Dec 17 00:38:15 functional-698418 kubelet[6659]: E1217 00:38:15.268677    6659 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765931895268244795  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:164169}  inodes_used:{value:73}}"
	Dec 17 00:38:22 functional-698418 kubelet[6659]: E1217 00:38:22.937882    6659 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-698418" containerName="kube-scheduler"
	Dec 17 00:38:22 functional-698418 kubelet[6659]: E1217 00:38:22.951488    6659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-698418_kube-system_390b595ba70cd6ac1adab7b4d760d832_2\" already exists"
	Dec 17 00:38:22 functional-698418 kubelet[6659]: E1217 00:38:22.951672    6659 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-698418_kube-system_390b595ba70cd6ac1adab7b4d760d832_2\" already exists" pod="kube-system/kube-scheduler-functional-698418"
	Dec 17 00:38:22 functional-698418 kubelet[6659]: E1217 00:38:22.951689    6659 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-698418_kube-system_390b595ba70cd6ac1adab7b4d760d832_2\" already exists" pod="kube-system/kube-scheduler-functional-698418"
	Dec 17 00:38:22 functional-698418 kubelet[6659]: E1217 00:38:22.951792    6659 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-scheduler-functional-698418_kube-system(390b595ba70cd6ac1adab7b4d760d832)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-scheduler-functional-698418_kube-system(390b595ba70cd6ac1adab7b4d760d832)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-scheduler-functional-698418_kube-system_390b595ba70cd6ac1adab7b4d760d832_2\\\" already exists\"" pod="kube-system/kube-scheduler-functional-698418" podUID="390b595ba70cd6ac1adab7b4d760d832"
	Dec 17 00:38:25 functional-698418 kubelet[6659]: E1217 00:38:25.051227    6659 manager.go:1119] Failed to create existing container: /kubepods/burstable/poda712d99056792744476561e1a0361d20/crio-5b1069943f833c3bb09b24a2a7a536da2fcef186396d925dddd710e4b8fa9150: Error finding container 5b1069943f833c3bb09b24a2a7a536da2fcef186396d925dddd710e4b8fa9150: Status 404 returned error can't find the container with id 5b1069943f833c3bb09b24a2a7a536da2fcef186396d925dddd710e4b8fa9150
	Dec 17 00:38:25 functional-698418 kubelet[6659]: E1217 00:38:25.051552    6659 manager.go:1119] Failed to create existing container: /kubepods/burstable/pod2a9045876df478aae3a7b636723bc540/crio-d4cac43b8d39630edc4155c4315349ac5c82c0155fc2675a8a0df421a0b3e285: Error finding container d4cac43b8d39630edc4155c4315349ac5c82c0155fc2675a8a0df421a0b3e285: Status 404 returned error can't find the container with id d4cac43b8d39630edc4155c4315349ac5c82c0155fc2675a8a0df421a0b3e285
	Dec 17 00:38:25 functional-698418 kubelet[6659]: E1217 00:38:25.051997    6659 manager.go:1119] Failed to create existing container: /kubepods/besteffort/pod380aa506-9f03-4398-8d13-ac938ed6953c/crio-6e560eef5590b64b6701674c0b23f8ba8f5ae436c11a5b33c8b8479b163a789f: Error finding container 6e560eef5590b64b6701674c0b23f8ba8f5ae436c11a5b33c8b8479b163a789f: Status 404 returned error can't find the container with id 6e560eef5590b64b6701674c0b23f8ba8f5ae436c11a5b33c8b8479b163a789f
	Dec 17 00:38:25 functional-698418 kubelet[6659]: E1217 00:38:25.052334    6659 manager.go:1119] Failed to create existing container: /kubepods/besteffort/pod8aeec296-0f7d-489d-88c0-1a8f24bcdb27/crio-bca5c63a70f5569e938857e5287ac12f75220866ec46e3511d6af97931577564: Error finding container bca5c63a70f5569e938857e5287ac12f75220866ec46e3511d6af97931577564: Status 404 returned error can't find the container with id bca5c63a70f5569e938857e5287ac12f75220866ec46e3511d6af97931577564
	Dec 17 00:38:25 functional-698418 kubelet[6659]: E1217 00:38:25.052666    6659 manager.go:1119] Failed to create existing container: /kubepods/burstable/pod375f206a-98e8-4a86-b794-274b2ac5d46d/crio-167bfbac01f7e4c703b6085e2349b7dc06d514f766c93cf3cfe7e69d8a32a706: Error finding container 167bfbac01f7e4c703b6085e2349b7dc06d514f766c93cf3cfe7e69d8a32a706: Status 404 returned error can't find the container with id 167bfbac01f7e4c703b6085e2349b7dc06d514f766c93cf3cfe7e69d8a32a706
	Dec 17 00:38:25 functional-698418 kubelet[6659]: E1217 00:38:25.052934    6659 manager.go:1119] Failed to create existing container: /kubepods/burstable/pod390b595ba70cd6ac1adab7b4d760d832/crio-affe536f1f44e4a5eec045150c989f9cf8613992cd1beaf52e0734244ce8c1bd: Error finding container affe536f1f44e4a5eec045150c989f9cf8613992cd1beaf52e0734244ce8c1bd: Status 404 returned error can't find the container with id affe536f1f44e4a5eec045150c989f9cf8613992cd1beaf52e0734244ce8c1bd
	Dec 17 00:38:25 functional-698418 kubelet[6659]: E1217 00:38:25.270217    6659 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765931905269966992  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:164169}  inodes_used:{value:73}}"
	Dec 17 00:38:25 functional-698418 kubelet[6659]: E1217 00:38:25.270575    6659 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765931905269966992  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:164169}  inodes_used:{value:73}}"
	Dec 17 00:38:35 functional-698418 kubelet[6659]: E1217 00:38:35.272290    6659 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765931915271810566  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:164169}  inodes_used:{value:73}}"
	Dec 17 00:38:35 functional-698418 kubelet[6659]: E1217 00:38:35.272311    6659 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765931915271810566  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:164169}  inodes_used:{value:73}}"
	Dec 17 00:38:36 functional-698418 kubelet[6659]: E1217 00:38:36.938464    6659 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-698418" containerName="kube-scheduler"
	Dec 17 00:38:36 functional-698418 kubelet[6659]: E1217 00:38:36.949672    6659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-698418_kube-system_390b595ba70cd6ac1adab7b4d760d832_2\" already exists"
	Dec 17 00:38:36 functional-698418 kubelet[6659]: E1217 00:38:36.949756    6659 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-698418_kube-system_390b595ba70cd6ac1adab7b4d760d832_2\" already exists" pod="kube-system/kube-scheduler-functional-698418"
	Dec 17 00:38:36 functional-698418 kubelet[6659]: E1217 00:38:36.949772    6659 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-functional-698418_kube-system_390b595ba70cd6ac1adab7b4d760d832_2\" already exists" pod="kube-system/kube-scheduler-functional-698418"
	Dec 17 00:38:36 functional-698418 kubelet[6659]: E1217 00:38:36.949835    6659 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-scheduler-functional-698418_kube-system(390b595ba70cd6ac1adab7b4d760d832)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-scheduler-functional-698418_kube-system(390b595ba70cd6ac1adab7b4d760d832)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-scheduler-functional-698418_kube-system_390b595ba70cd6ac1adab7b4d760d832_2\\\" already exists\"" pod="kube-system/kube-scheduler-functional-698418" podUID="390b595ba70cd6ac1adab7b4d760d832"
	Dec 17 00:38:45 functional-698418 kubelet[6659]: E1217 00:38:45.273615    6659 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765931925273271556  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:164169}  inodes_used:{value:73}}"
	Dec 17 00:38:45 functional-698418 kubelet[6659]: E1217 00:38:45.273652    6659 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765931925273271556  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:164169}  inodes_used:{value:73}}"
	
	
	==> storage-provisioner [25dad2630a2cc7b34ac11027a3984e04a6f04faeb4aea873980cbc58aa6ad446] <==
	I1217 00:22:51.076154       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1217 00:22:51.080587       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [eeabbda62f1da3d54be70bfe16b34f07c8e8a53a40811a81f10481ac9ce44216] <==
	W1217 00:38:21.634320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:38:23.637263       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:38:23.644164       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:38:25.648131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:38:25.653761       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:38:27.656967       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:38:27.662347       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:38:29.665115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:38:29.670147       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:38:31.673934       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:38:31.679599       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:38:33.683913       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:38:33.693204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:38:35.697373       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:38:35.703273       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:38:37.707661       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:38:37.717213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:38:39.720659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:38:39.726453       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:38:41.730777       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:38:41.735978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:38:43.740340       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:38:43.750139       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:38:45.754188       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 00:38:45.759905       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
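The kubelet log above loops on the same CreatePodSandbox failure: the CRI reports that a pod sandbox named "k8s_kube-scheduler-functional-698418_kube-system_390b595ba70cd6ac1adab7b4d760d832_2" already exists, so the kube-scheduler pod is never recreated. One way to inspect and clear such a stale sandbox by hand, assuming CRI-O inside the functional-698418 VM is still responding (these commands are illustrative and were not run as part of the test; the POD-ID placeholder comes from the first command's output):

	# list CRI-O pod sandboxes for the scheduler pod and note the POD ID
	out/minikube-linux-amd64 -p functional-698418 ssh -- sudo crictl pods --name kube-scheduler-functional-698418
	# force-remove the leftover sandbox so the kubelet can create a fresh one
	out/minikube-linux-amd64 -p functional-698418 ssh -- sudo crictl rmp --force <POD-ID>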
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-698418 -n functional-698418
helpers_test.go:270: (dbg) Run:  kubectl --context functional-698418 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: busybox-mount hello-node-5758569b79-z5vc8 hello-node-connect-9f67c86d4-n5xgg mysql-7d7b65bc95-m98rn sp-pod dashboard-metrics-scraper-5565989548-wpflw kubernetes-dashboard-b84665fb8-6dx8k
helpers_test.go:283: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context functional-698418 describe pod busybox-mount hello-node-5758569b79-z5vc8 hello-node-connect-9f67c86d4-n5xgg mysql-7d7b65bc95-m98rn sp-pod dashboard-metrics-scraper-5565989548-wpflw kubernetes-dashboard-b84665fb8-6dx8k
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context functional-698418 describe pod busybox-mount hello-node-5758569b79-z5vc8 hello-node-connect-9f67c86d4-n5xgg mysql-7d7b65bc95-m98rn sp-pod dashboard-metrics-scraper-5565989548-wpflw kubernetes-dashboard-b84665fb8-6dx8k: exit status 1 (121.081893ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  mount-munger:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    Environment:  <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p7t2t (ro)
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-p7t2t:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             hello-node-5758569b79-z5vc8
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=hello-node
	                  pod-template-hash=5758569b79
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-node-5758569b79
	Containers:
	  echo-server:
	    Image:        kicbase/echo-server
	    Port:         <none>
	    Host Port:    <none>
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rkgzf (ro)
	Volumes:
	  kube-api-access-rkgzf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             hello-node-connect-9f67c86d4-n5xgg
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=hello-node-connect
	                  pod-template-hash=9f67c86d4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-node-connect-9f67c86d4
	Containers:
	  echo-server:
	    Image:        kicbase/echo-server
	    Port:         <none>
	    Host Port:    <none>
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pj79f (ro)
	Volumes:
	  kube-api-access-pj79f:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             mysql-7d7b65bc95-m98rn
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=mysql
	                  pod-template-hash=7d7b65bc95
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/mysql-7d7b65bc95
	Containers:
	  mysql:
	    Image:      public.ecr.aws/docker/library/mysql:8.4
	    Port:       3306/TCP (mysql)
	    Host Port:  0/TCP (mysql)
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4wznv (ro)
	Volumes:
	  kube-api-access-4wznv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  myfrontend:
	    Image:        public.ecr.aws/nginx/nginx:alpine
	    Port:         <none>
	    Host Port:    <none>
	    Environment:  <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wsvrf (ro)
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-wsvrf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5565989548-wpflw" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-6dx8k" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context functional-698418 describe pod busybox-mount hello-node-5758569b79-z5vc8 hello-node-connect-9f67c86d4-n5xgg mysql-7d7b65bc95-m98rn sp-pod dashboard-metrics-scraper-5565989548-wpflw kubernetes-dashboard-b84665fb8-6dx8k: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (602.61s)
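Every pod described above is Pending with Node: <none>, i.e. nothing in the default namespace was ever scheduled, which lines up with the kube-scheduler sandbox errors in the minikube logs. A short way to confirm that scheduling is the bottleneck (sketch only; context name taken from this report, output will differ per run):

	# check whether the scheduler static pod itself is running
	kubectl --context functional-698418 get pods -n kube-system -l component=kube-scheduler -o wide
	# look for FailedScheduling or other events on the stuck workloads
	kubectl --context functional-698418 get events -n default --sort-by=.lastTimestamp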

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (600.57s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-698418 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-698418 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-5758569b79-z5vc8" [084dd2ad-2bcc-439c-8c71-9538842003cc] Pending
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-698418 -n functional-698418
functional_test.go:1460: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-12-17 00:38:46.426338495 +0000 UTC m=+1953.129265611
functional_test.go:1460: (dbg) Run:  kubectl --context functional-698418 describe po hello-node-5758569b79-z5vc8 -n default
functional_test.go:1460: (dbg) kubectl --context functional-698418 describe po hello-node-5758569b79-z5vc8 -n default:
Name:             hello-node-5758569b79-z5vc8
Namespace:        default
Priority:         0
Service Account:  default
Node:             <none>
Labels:           app=hello-node
pod-template-hash=5758569b79
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    ReplicaSet/hello-node-5758569b79
Containers:
echo-server:
Image:        kicbase/echo-server
Port:         <none>
Host Port:    <none>
Environment:  <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rkgzf (ro)
Volumes:
kube-api-access-rkgzf:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>
functional_test.go:1460: (dbg) Run:  kubectl --context functional-698418 logs hello-node-5758569b79-z5vc8 -n default
functional_test.go:1460: (dbg) kubectl --context functional-698418 logs hello-node-5758569b79-z5vc8 -n default:
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (600.57s)
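The deployment and the NodePort service were created, but with the pod stuck in Pending the service never gains any ready endpoints; that is the same state that later makes the service --url command exit with SVC_UNREACHABLE for hello-node. A minimal check, assuming the same context (illustrative only, not part of the test run):

	# the endpoints object should list no addresses while the pod is Pending
	kubectl --context functional-698418 get deploy,endpoints,svc hello-node -o wide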

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (242.28s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-698418 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1769150467/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765931331407799177" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1769150467/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765931331407799177" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1769150467/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765931331407799177" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1769150467/001/test-1765931331407799177
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-698418 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (157.991952ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1217 00:28:51.566099   17074 retry.go:31] will retry after 730.81496ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 17 00:28 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 17 00:28 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 17 00:28 test-1765931331407799177
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 ssh cat /mount-9p/test-1765931331407799177
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-698418 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [c7e5d0e9-f69f-4188-90c0-ff58c38fc1e7] Pending
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: WARNING: pod list for "default" "integration-test=busybox-mount" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_mount_test.go:153: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: pod "integration-test=busybox-mount" failed to start within 4m0s: context deadline exceeded ****
functional_test_mount_test.go:153: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-698418 -n functional-698418
functional_test_mount_test.go:153: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: showing logs for failed pods as of 2025-12-17 00:32:53.119964759 +0000 UTC m=+1599.822891861
functional_test_mount_test.go:153: (dbg) Run:  kubectl --context functional-698418 describe po busybox-mount -n default
functional_test_mount_test.go:153: (dbg) kubectl --context functional-698418 describe po busybox-mount -n default:
Name:             busybox-mount
Namespace:        default
Priority:         0
Service Account:  default
Node:             <none>
Labels:           integration-test=busybox-mount
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Containers:
mount-munger:
Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
Port:       <none>
Host Port:  <none>
Command:
/bin/sh
-c
--
Args:
cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
Environment:  <none>
Mounts:
/mount-9p from test-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p7t2t (ro)
Volumes:
test-volume:
Type:          HostPath (bare host directory volume)
Path:          /mount-9p
HostPathType:  
kube-api-access-p7t2t:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>
functional_test_mount_test.go:153: (dbg) Run:  kubectl --context functional-698418 logs busybox-mount -n default
functional_test_mount_test.go:153: (dbg) kubectl --context functional-698418 logs busybox-mount -n default:
functional_test_mount_test.go:154: failed waiting for busybox-mount pod: integration-test=busybox-mount within 4m0s: context deadline exceeded
functional_test_mount_test.go:80: "TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-698418 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (164.945852ms)

                                                
                                                
-- stdout --
	192.168.39.1 on /mount-9p type 9p (rw,relatime,dfltuid=1000,dfltgid=1000,access=any,msize=262144,trans=tcp,noextend,port=44653)
	total 2
	-rw-r--r-- 1 docker docker 24 Dec 17 00:28 created-by-test
	-rw-r--r-- 1 docker docker 24 Dec 17 00:28 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Dec 17 00:28 test-1765931331407799177
	cat: /mount-9p/pod-dates: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-amd64 -p functional-698418 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-698418 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1769150467/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-amd64 mount -p functional-698418 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1769150467/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1769150467/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.39.1:44653
* Userspace file server: 
ufs starting
* Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1769150467/001 to /mount-9p

                                                
                                                
* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...

                                                
                                                

                                                
                                                
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-amd64 mount -p functional-698418 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1769150467/001:/mount-9p --alsologtostderr -v=1] stderr:
I1217 00:28:51.467013   27756 out.go:360] Setting OutFile to fd 1 ...
I1217 00:28:51.467153   27756 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:28:51.467161   27756 out.go:374] Setting ErrFile to fd 2...
I1217 00:28:51.467165   27756 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:28:51.467356   27756 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12839/.minikube/bin
I1217 00:28:51.467576   27756 mustload.go:66] Loading cluster: functional-698418
I1217 00:28:51.467864   27756 config.go:182] Loaded profile config "functional-698418": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1217 00:28:51.470363   27756 host.go:66] Checking if "functional-698418" exists ...
I1217 00:28:51.473578   27756 main.go:143] libmachine: domain functional-698418 has defined MAC address 52:54:00:29:90:fc in network mk-functional-698418
I1217 00:28:51.474245   27756 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:29:90:fc", ip: ""} in network mk-functional-698418: {Iface:virbr1 ExpiryTime:2025-12-17 01:18:18 +0000 UTC Type:0 Mac:52:54:00:29:90:fc Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:functional-698418 Clientid:01:52:54:00:29:90:fc}
I1217 00:28:51.474280   27756 main.go:143] libmachine: domain functional-698418 has defined IP address 192.168.39.109 and MAC address 52:54:00:29:90:fc in network mk-functional-698418
I1217 00:28:51.477302   27756 out.go:179] * Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1769150467/001 into VM as /mount-9p ...
I1217 00:28:51.479103   27756 out.go:179]   - Mount type:   9p
I1217 00:28:51.480839   27756 out.go:179]   - User ID:      docker
I1217 00:28:51.482318   27756 out.go:179]   - Group ID:     docker
I1217 00:28:51.483797   27756 out.go:179]   - Version:      9p2000.L
I1217 00:28:51.485288   27756 out.go:179]   - Message Size: 262144
I1217 00:28:51.489649   27756 out.go:179]   - Options:      map[]
I1217 00:28:51.491185   27756 out.go:179]   - Bind Address: 192.168.39.1:44653
I1217 00:28:51.492795   27756 out.go:179] * Userspace file server: 
I1217 00:28:51.492865   27756 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1217 00:28:51.495989   27756 main.go:143] libmachine: domain functional-698418 has defined MAC address 52:54:00:29:90:fc in network mk-functional-698418
I1217 00:28:51.496428   27756 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:29:90:fc", ip: ""} in network mk-functional-698418: {Iface:virbr1 ExpiryTime:2025-12-17 01:18:18 +0000 UTC Type:0 Mac:52:54:00:29:90:fc Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:functional-698418 Clientid:01:52:54:00:29:90:fc}
I1217 00:28:51.496455   27756 main.go:143] libmachine: domain functional-698418 has defined IP address 192.168.39.109 and MAC address 52:54:00:29:90:fc in network mk-functional-698418
I1217 00:28:51.496644   27756 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/functional-698418/id_rsa Username:docker}
I1217 00:28:51.580863   27756 mount.go:180] unmount for /mount-9p ran successfully
I1217 00:28:51.580918   27756 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1217 00:28:51.594817   27756 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=44653,trans=tcp,version=9p2000.L 192.168.39.1 /mount-9p"
I1217 00:28:51.629332   27756 main.go:127] stdlog: ufs.go:141 connected
I1217 00:28:51.629526   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tversion tag 65535 msize 262144 version '9P2000.L'
I1217 00:28:51.629637   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rversion tag 65535 msize 262144 version '9P2000'
I1217 00:28:51.630290   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I1217 00:28:51.630402   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rattach tag 0 aqid (20fa0b9 29b5af4e 'd')
I1217 00:28:51.631173   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tstat tag 0 fid 0
I1217 00:28:51.631329   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa0b9 29b5af4e 'd') m d775 at 0 mt 1765931331 l 4096 t 0 d 0 ext )
I1217 00:28:51.631694   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tstat tag 0 fid 0
I1217 00:28:51.631817   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa0b9 29b5af4e 'd') m d775 at 0 mt 1765931331 l 4096 t 0 d 0 ext )
I1217 00:28:51.633261   27756 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-698418/.mount-process: {Name:mk5bb07b83085179dda6ad9b00e21c1085523231 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 00:28:51.633481   27756 mount.go:105] mount successful: ""
I1217 00:28:51.635371   27756 out.go:179] * Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1769150467/001 to /mount-9p
I1217 00:28:51.636804   27756 out.go:203] 
I1217 00:28:51.638314   27756 out.go:179] * NOTE: This process must stay alive for the mount to be accessible ...
I1217 00:28:52.589700   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tstat tag 0 fid 0
I1217 00:28:52.589849   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa0b9 29b5af4e 'd') m d775 at 0 mt 1765931331 l 4096 t 0 d 0 ext )
I1217 00:28:52.591856   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Twalk tag 0 fid 0 newfid 1 
I1217 00:28:52.591904   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rwalk tag 0 
I1217 00:28:52.592133   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Topen tag 0 fid 1 mode 0
I1217 00:28:52.592198   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Ropen tag 0 qid (20fa0b9 29b5af4e 'd') iounit 0
I1217 00:28:52.592541   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tstat tag 0 fid 0
I1217 00:28:52.592640   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa0b9 29b5af4e 'd') m d775 at 0 mt 1765931331 l 4096 t 0 d 0 ext )
I1217 00:28:52.593068   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tread tag 0 fid 1 offset 0 count 262120
I1217 00:28:52.593260   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rread tag 0 count 258
I1217 00:28:52.593538   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tread tag 0 fid 1 offset 258 count 261862
I1217 00:28:52.593587   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rread tag 0 count 0
I1217 00:28:52.593806   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tread tag 0 fid 1 offset 258 count 262120
I1217 00:28:52.593837   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rread tag 0 count 0
I1217 00:28:52.594065   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1217 00:28:52.594131   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rwalk tag 0 (20fa0bb 29b5af4e '') 
I1217 00:28:52.594415   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tstat tag 0 fid 2
I1217 00:28:52.594499   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa0bb 29b5af4e '') m 644 at 0 mt 1765931331 l 24 t 0 d 0 ext )
I1217 00:28:52.594792   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tstat tag 0 fid 2
I1217 00:28:52.594898   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa0bb 29b5af4e '') m 644 at 0 mt 1765931331 l 24 t 0 d 0 ext )
I1217 00:28:52.595086   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tclunk tag 0 fid 2
I1217 00:28:52.595111   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rclunk tag 0
I1217 00:28:52.595667   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1217 00:28:52.595721   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rwalk tag 0 (20fa0bb 29b5af4e '') 
I1217 00:28:52.596028   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tstat tag 0 fid 2
I1217 00:28:52.596112   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa0bb 29b5af4e '') m 644 at 0 mt 1765931331 l 24 t 0 d 0 ext )
I1217 00:28:52.596451   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tclunk tag 0 fid 2
I1217 00:28:52.596504   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rclunk tag 0
I1217 00:28:52.596848   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Twalk tag 0 fid 0 newfid 2 0:'test-1765931331407799177' 
I1217 00:28:52.596900   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rwalk tag 0 (20fa0bc 29b5af4e '') 
I1217 00:28:52.597185   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tstat tag 0 fid 2
I1217 00:28:52.597302   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rstat tag 0 st ('test-1765931331407799177' 'jenkins' 'balintp' '' q (20fa0bc 29b5af4e '') m 644 at 0 mt 1765931331 l 24 t 0 d 0 ext )
I1217 00:28:52.597505   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tstat tag 0 fid 2
I1217 00:28:52.597595   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rstat tag 0 st ('test-1765931331407799177' 'jenkins' 'balintp' '' q (20fa0bc 29b5af4e '') m 644 at 0 mt 1765931331 l 24 t 0 d 0 ext )
I1217 00:28:52.597935   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tclunk tag 0 fid 2
I1217 00:28:52.597970   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rclunk tag 0
I1217 00:28:52.598219   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Twalk tag 0 fid 0 newfid 2 0:'test-1765931331407799177' 
I1217 00:28:52.598250   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rwalk tag 0 (20fa0bc 29b5af4e '') 
I1217 00:28:52.598713   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tstat tag 0 fid 2
I1217 00:28:52.598830   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rstat tag 0 st ('test-1765931331407799177' 'jenkins' 'balintp' '' q (20fa0bc 29b5af4e '') m 644 at 0 mt 1765931331 l 24 t 0 d 0 ext )
I1217 00:28:52.599084   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tclunk tag 0 fid 2
I1217 00:28:52.599113   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rclunk tag 0
I1217 00:28:52.599407   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1217 00:28:52.599468   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rwalk tag 0 (20fa0ba 29b5af4e '') 
I1217 00:28:52.599668   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tstat tag 0 fid 2
I1217 00:28:52.599747   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa0ba 29b5af4e '') m 644 at 0 mt 1765931331 l 24 t 0 d 0 ext )
I1217 00:28:52.599968   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tstat tag 0 fid 2
I1217 00:28:52.600090   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa0ba 29b5af4e '') m 644 at 0 mt 1765931331 l 24 t 0 d 0 ext )
I1217 00:28:52.600440   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tclunk tag 0 fid 2
I1217 00:28:52.600467   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rclunk tag 0
I1217 00:28:52.600681   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1217 00:28:52.600725   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rwalk tag 0 (20fa0ba 29b5af4e '') 
I1217 00:28:52.600958   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tstat tag 0 fid 2
I1217 00:28:52.601054   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa0ba 29b5af4e '') m 644 at 0 mt 1765931331 l 24 t 0 d 0 ext )
I1217 00:28:52.601365   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tclunk tag 0 fid 2
I1217 00:28:52.601413   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rclunk tag 0
I1217 00:28:52.601717   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tread tag 0 fid 1 offset 258 count 262120
I1217 00:28:52.601761   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rread tag 0 count 0
I1217 00:28:52.602066   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tclunk tag 0 fid 1
I1217 00:28:52.602112   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rclunk tag 0
I1217 00:28:52.756819   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Twalk tag 0 fid 0 newfid 1 0:'test-1765931331407799177' 
I1217 00:28:52.756883   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rwalk tag 0 (20fa0bc 29b5af4e '') 
I1217 00:28:52.757244   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tstat tag 0 fid 1
I1217 00:28:52.757401   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rstat tag 0 st ('test-1765931331407799177' 'jenkins' 'balintp' '' q (20fa0bc 29b5af4e '') m 644 at 0 mt 1765931331 l 24 t 0 d 0 ext )
I1217 00:28:52.757683   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Twalk tag 0 fid 1 newfid 2 
I1217 00:28:52.757718   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rwalk tag 0 
I1217 00:28:52.757969   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Topen tag 0 fid 2 mode 0
I1217 00:28:52.758057   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Ropen tag 0 qid (20fa0bc 29b5af4e '') iounit 0
I1217 00:28:52.758372   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tstat tag 0 fid 1
I1217 00:28:52.758479   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rstat tag 0 st ('test-1765931331407799177' 'jenkins' 'balintp' '' q (20fa0bc 29b5af4e '') m 644 at 0 mt 1765931331 l 24 t 0 d 0 ext )
I1217 00:28:52.758858   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tread tag 0 fid 2 offset 0 count 262120
I1217 00:28:52.758931   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rread tag 0 count 24
I1217 00:28:52.759397   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tread tag 0 fid 2 offset 24 count 262120
I1217 00:28:52.759436   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rread tag 0 count 0
I1217 00:28:52.759714   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tread tag 0 fid 2 offset 24 count 262120
I1217 00:28:52.759745   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rread tag 0 count 0
I1217 00:28:52.760128   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tclunk tag 0 fid 2
I1217 00:28:52.760171   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rclunk tag 0
I1217 00:28:52.760471   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tclunk tag 0 fid 1
I1217 00:28:52.760514   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rclunk tag 0
I1217 00:32:53.391044   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tstat tag 0 fid 0
I1217 00:32:53.391650   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa0b9 29b5af4e 'd') m d775 at 0 mt 1765931331 l 4096 t 0 d 0 ext )
I1217 00:32:53.394135   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Twalk tag 0 fid 0 newfid 1 
I1217 00:32:53.394229   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rwalk tag 0 
I1217 00:32:53.394657   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Topen tag 0 fid 1 mode 0
I1217 00:32:53.394730   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Ropen tag 0 qid (20fa0b9 29b5af4e 'd') iounit 0
I1217 00:32:53.395295   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tstat tag 0 fid 0
I1217 00:32:53.395412   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa0b9 29b5af4e 'd') m d775 at 0 mt 1765931331 l 4096 t 0 d 0 ext )
I1217 00:32:53.395862   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tread tag 0 fid 1 offset 0 count 262120
I1217 00:32:53.396060   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rread tag 0 count 258
I1217 00:32:53.397286   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tread tag 0 fid 1 offset 258 count 261862
I1217 00:32:53.397322   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rread tag 0 count 0
I1217 00:32:53.397581   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tread tag 0 fid 1 offset 258 count 262120
I1217 00:32:53.397610   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rread tag 0 count 0
I1217 00:32:53.397903   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1217 00:32:53.397943   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rwalk tag 0 (20fa0bb 29b5af4e '') 
I1217 00:32:53.398256   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tstat tag 0 fid 2
I1217 00:32:53.398348   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa0bb 29b5af4e '') m 644 at 0 mt 1765931331 l 24 t 0 d 0 ext )
I1217 00:32:53.398646   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tstat tag 0 fid 2
I1217 00:32:53.398722   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa0bb 29b5af4e '') m 644 at 0 mt 1765931331 l 24 t 0 d 0 ext )
I1217 00:32:53.398968   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tclunk tag 0 fid 2
I1217 00:32:53.398993   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rclunk tag 0
I1217 00:32:53.399331   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1217 00:32:53.399371   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rwalk tag 0 (20fa0bb 29b5af4e '') 
I1217 00:32:53.399645   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tstat tag 0 fid 2
I1217 00:32:53.399719   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa0bb 29b5af4e '') m 644 at 0 mt 1765931331 l 24 t 0 d 0 ext )
I1217 00:32:53.399957   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tclunk tag 0 fid 2
I1217 00:32:53.399980   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rclunk tag 0
I1217 00:32:53.400268   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Twalk tag 0 fid 0 newfid 2 0:'test-1765931331407799177' 
I1217 00:32:53.400314   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rwalk tag 0 (20fa0bc 29b5af4e '') 
I1217 00:32:53.400532   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tstat tag 0 fid 2
I1217 00:32:53.400607   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rstat tag 0 st ('test-1765931331407799177' 'jenkins' 'balintp' '' q (20fa0bc 29b5af4e '') m 644 at 0 mt 1765931331 l 24 t 0 d 0 ext )
I1217 00:32:53.400820   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tstat tag 0 fid 2
I1217 00:32:53.400906   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rstat tag 0 st ('test-1765931331407799177' 'jenkins' 'balintp' '' q (20fa0bc 29b5af4e '') m 644 at 0 mt 1765931331 l 24 t 0 d 0 ext )
I1217 00:32:53.401487   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tclunk tag 0 fid 2
I1217 00:32:53.401511   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rclunk tag 0
I1217 00:32:53.401890   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Twalk tag 0 fid 0 newfid 2 0:'test-1765931331407799177' 
I1217 00:32:53.401946   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rwalk tag 0 (20fa0bc 29b5af4e '') 
I1217 00:32:53.402238   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tstat tag 0 fid 2
I1217 00:32:53.402313   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rstat tag 0 st ('test-1765931331407799177' 'jenkins' 'balintp' '' q (20fa0bc 29b5af4e '') m 644 at 0 mt 1765931331 l 24 t 0 d 0 ext )
I1217 00:32:53.402524   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tclunk tag 0 fid 2
I1217 00:32:53.402554   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rclunk tag 0
I1217 00:32:53.402698   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1217 00:32:53.402737   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rwalk tag 0 (20fa0ba 29b5af4e '') 
I1217 00:32:53.402869   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tstat tag 0 fid 2
I1217 00:32:53.402960   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa0ba 29b5af4e '') m 644 at 0 mt 1765931331 l 24 t 0 d 0 ext )
I1217 00:32:53.403128   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tstat tag 0 fid 2
I1217 00:32:53.403199   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa0ba 29b5af4e '') m 644 at 0 mt 1765931331 l 24 t 0 d 0 ext )
I1217 00:32:53.403364   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tclunk tag 0 fid 2
I1217 00:32:53.403397   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rclunk tag 0
I1217 00:32:53.403541   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1217 00:32:53.403577   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rwalk tag 0 (20fa0ba 29b5af4e '') 
I1217 00:32:53.403745   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tstat tag 0 fid 2
I1217 00:32:53.403837   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa0ba 29b5af4e '') m 644 at 0 mt 1765931331 l 24 t 0 d 0 ext )
I1217 00:32:53.403983   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tclunk tag 0 fid 2
I1217 00:32:53.404011   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rclunk tag 0
I1217 00:32:53.404271   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tread tag 0 fid 1 offset 258 count 262120
I1217 00:32:53.404307   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rread tag 0 count 0
I1217 00:32:53.404462   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tclunk tag 0 fid 1
I1217 00:32:53.404500   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rclunk tag 0
I1217 00:32:53.407264   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I1217 00:32:53.407336   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rerror tag 0 ename 'file not found' ecode 0
I1217 00:32:53.567480   27756 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.39.109:41542 Tclunk tag 0 fid 0
I1217 00:32:53.567541   27756 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.39.109:41542 Rclunk tag 0
I1217 00:32:53.568628   27756 main.go:127] stdlog: ufs.go:147 disconnected
I1217 00:32:53.585007   27756 out.go:179] * Unmounting /mount-9p ...
I1217 00:32:53.586530   27756 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1217 00:32:53.595934   27756 mount.go:180] unmount for /mount-9p ran successfully
I1217 00:32:53.596082   27756 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-698418/.mount-process: {Name:mk5bb07b83085179dda6ad9b00e21c1085523231 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 00:32:53.597835   27756 out.go:203] 
W1217 00:32:53.599084   27756 out.go:285] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I1217 00:32:53.600304   27756 out.go:203] 
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (242.28s)
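
The 9P trace above is the mount server's request/response log: each T-message (Twalk, Tstat, Tread, Tclunk) should be answered by a matching R-message, and the run ends with an Rerror ('file not found' for 'pod-dates') followed by a disconnect when the test was terminated. As a debugging aid, here is a minimal sketch (not part of the test suite; the ">>>"/"<<<" stdlog format is taken from the trace above, everything else is hypothetical) that tallies message types from such a trace so unmatched T-/R-pairs stand out:

// tally9p.go - reads a minikube "stdlog: srv_conn.go" 9P trace on stdin
// and counts each message type seen after the ">>>"/"<<<" direction marker.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Matches e.g. ">>> 192.168.39.109:41542 Tstat ..." or "<<< ... Rstat ...".
	re := regexp.MustCompile(`(>>>|<<<) \S+ ([TR]\w+)`)
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[2]]++
		}
	}
	for msg, n := range counts {
		fmt.Printf("%-8s %d\n", msg, n)
	}
}

Fed the trace above, equal Twalk/Rwalk and Tclunk/Rclunk counts would indicate the protocol exchange itself was healthy and the failure came from the interrupt (MK_INTERRUPTED), not from the 9P server.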

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-698418 service --namespace=default --https --url hello-node: exit status 115 (266.929686ms)

                                                
                                                
-- stdout --
	https://192.168.39.109:30947
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-698418 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.27s)
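
Note that minikube did print the NodePort URL on stdout before exiting 115: the failure is SVC_UNREACHABLE because no running pod backed the hello-node service at that moment, and the same root cause repeats in the Format and URL subtests below. A minimal sketch of the ordering these subtests appear to race against (assuming the functional-698418 profile from the test and the app=hello-node label that `kubectl create deployment hello-node` would set; the polling loop itself is an assumption, not the test's code):

// waitandurl.go - wait for a running hello-node pod, then ask minikube
// for the service URL, instead of querying the URL immediately.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func main() {
	for i := 0; i < 30; i++ {
		// kubectl prints one phase per matching pod, e.g. "Running".
		out, err := exec.Command("kubectl", "get", "pods",
			"-l", "app=hello-node",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.Contains(string(out), "Running") {
			url, err := exec.Command("out/minikube-linux-amd64",
				"-p", "functional-698418",
				"service", "hello-node", "--url").Output()
			if err != nil {
				log.Fatalf("service url: %v", err)
			}
			fmt.Print(string(url))
			return
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("no running hello-node pod after 60s")
}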

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-698418 service hello-node --url --format={{.IP}}: exit status 115 (271.699262ms)

                                                
                                                
-- stdout --
	192.168.39.109
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-698418 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.27s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.24s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-698418 service hello-node --url: exit status 115 (239.811551ms)

                                                
                                                
-- stdout --
	http://192.168.39.109:30947
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-698418 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.109:30947
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.24s)

                                                
                                    
TestPreload (141.74s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-634039 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio
E1217 01:20:07.735180   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-634039 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio: (1m26.202400765s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-634039 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 -p test-preload-634039 image pull gcr.io/k8s-minikube/busybox: (2.407268049s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-634039
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-634039: (7.054822797s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-634039 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-634039 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (43.431175677s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-634039 image list
preload_test.go:73: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20250512-df8de77b

                                                
                                                
-- /stdout --
panic.go:615: *** TestPreload FAILED at 2025-12-17 01:21:58.29952369 +0000 UTC m=+4545.002450794
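
The image list above contains only the preloaded system images: gcr.io/k8s-minikube/busybox, which was pulled before the stop, is gone after the --preload=true restart, which is exactly what the test asserts against. A minimal sketch of that verification step (binary path and profile name taken from the test above; this is an illustration, not preload_test.go itself):

// imagecheck.go - run "minikube image list" and require that the image
// pulled before the restart is still present.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	const image = "gcr.io/k8s-minikube/busybox"
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "test-preload-634039", "image", "list").CombinedOutput()
	if err != nil {
		log.Fatalf("image list: %v\n%s", err, out)
	}
	if !strings.Contains(string(out), image) {
		log.Fatalf("expected %s in image list, got:\n%s", image, out)
	}
	fmt.Println("found", image)
}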
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-634039 -n test-preload-634039
helpers_test.go:253: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-634039 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p test-preload-634039 logs -n 25: (1.023849466s)
helpers_test.go:261: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-412026 ssh -n multinode-412026-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-412026     │ jenkins │ v1.37.0 │ 17 Dec 25 01:09 UTC │ 17 Dec 25 01:09 UTC │
	│ ssh     │ multinode-412026 ssh -n multinode-412026 sudo cat /home/docker/cp-test_multinode-412026-m03_multinode-412026.txt                                          │ multinode-412026     │ jenkins │ v1.37.0 │ 17 Dec 25 01:09 UTC │ 17 Dec 25 01:09 UTC │
	│ cp      │ multinode-412026 cp multinode-412026-m03:/home/docker/cp-test.txt multinode-412026-m02:/home/docker/cp-test_multinode-412026-m03_multinode-412026-m02.txt │ multinode-412026     │ jenkins │ v1.37.0 │ 17 Dec 25 01:09 UTC │ 17 Dec 25 01:09 UTC │
	│ ssh     │ multinode-412026 ssh -n multinode-412026-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-412026     │ jenkins │ v1.37.0 │ 17 Dec 25 01:09 UTC │ 17 Dec 25 01:09 UTC │
	│ ssh     │ multinode-412026 ssh -n multinode-412026-m02 sudo cat /home/docker/cp-test_multinode-412026-m03_multinode-412026-m02.txt                                  │ multinode-412026     │ jenkins │ v1.37.0 │ 17 Dec 25 01:09 UTC │ 17 Dec 25 01:09 UTC │
	│ node    │ multinode-412026 node stop m03                                                                                                                            │ multinode-412026     │ jenkins │ v1.37.0 │ 17 Dec 25 01:09 UTC │ 17 Dec 25 01:09 UTC │
	│ node    │ multinode-412026 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-412026     │ jenkins │ v1.37.0 │ 17 Dec 25 01:09 UTC │ 17 Dec 25 01:09 UTC │
	│ node    │ list -p multinode-412026                                                                                                                                  │ multinode-412026     │ jenkins │ v1.37.0 │ 17 Dec 25 01:09 UTC │                     │
	│ stop    │ -p multinode-412026                                                                                                                                       │ multinode-412026     │ jenkins │ v1.37.0 │ 17 Dec 25 01:09 UTC │ 17 Dec 25 01:12 UTC │
	│ start   │ -p multinode-412026 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-412026     │ jenkins │ v1.37.0 │ 17 Dec 25 01:12 UTC │ 17 Dec 25 01:14 UTC │
	│ node    │ list -p multinode-412026                                                                                                                                  │ multinode-412026     │ jenkins │ v1.37.0 │ 17 Dec 25 01:14 UTC │                     │
	│ node    │ multinode-412026 node delete m03                                                                                                                          │ multinode-412026     │ jenkins │ v1.37.0 │ 17 Dec 25 01:14 UTC │ 17 Dec 25 01:14 UTC │
	│ stop    │ multinode-412026 stop                                                                                                                                     │ multinode-412026     │ jenkins │ v1.37.0 │ 17 Dec 25 01:14 UTC │ 17 Dec 25 01:17 UTC │
	│ start   │ -p multinode-412026 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio                                                            │ multinode-412026     │ jenkins │ v1.37.0 │ 17 Dec 25 01:17 UTC │ 17 Dec 25 01:18 UTC │
	│ node    │ list -p multinode-412026                                                                                                                                  │ multinode-412026     │ jenkins │ v1.37.0 │ 17 Dec 25 01:18 UTC │                     │
	│ start   │ -p multinode-412026-m02 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-412026-m02 │ jenkins │ v1.37.0 │ 17 Dec 25 01:18 UTC │                     │
	│ start   │ -p multinode-412026-m03 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-412026-m03 │ jenkins │ v1.37.0 │ 17 Dec 25 01:18 UTC │ 17 Dec 25 01:19 UTC │
	│ node    │ add -p multinode-412026                                                                                                                                   │ multinode-412026     │ jenkins │ v1.37.0 │ 17 Dec 25 01:19 UTC │                     │
	│ delete  │ -p multinode-412026-m03                                                                                                                                   │ multinode-412026-m03 │ jenkins │ v1.37.0 │ 17 Dec 25 01:19 UTC │ 17 Dec 25 01:19 UTC │
	│ delete  │ -p multinode-412026                                                                                                                                       │ multinode-412026     │ jenkins │ v1.37.0 │ 17 Dec 25 01:19 UTC │ 17 Dec 25 01:19 UTC │
	│ start   │ -p test-preload-634039 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio                                │ test-preload-634039  │ jenkins │ v1.37.0 │ 17 Dec 25 01:19 UTC │ 17 Dec 25 01:21 UTC │
	│ image   │ test-preload-634039 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-634039  │ jenkins │ v1.37.0 │ 17 Dec 25 01:21 UTC │ 17 Dec 25 01:21 UTC │
	│ stop    │ -p test-preload-634039                                                                                                                                    │ test-preload-634039  │ jenkins │ v1.37.0 │ 17 Dec 25 01:21 UTC │ 17 Dec 25 01:21 UTC │
	│ start   │ -p test-preload-634039 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                          │ test-preload-634039  │ jenkins │ v1.37.0 │ 17 Dec 25 01:21 UTC │ 17 Dec 25 01:21 UTC │
	│ image   │ test-preload-634039 image list                                                                                                                            │ test-preload-634039  │ jenkins │ v1.37.0 │ 17 Dec 25 01:21 UTC │ 17 Dec 25 01:21 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 01:21:14
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 01:21:14.730546   47519 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:21:14.730770   47519 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:21:14.730779   47519 out.go:374] Setting ErrFile to fd 2...
	I1217 01:21:14.730783   47519 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:21:14.730957   47519 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12839/.minikube/bin
	I1217 01:21:14.731413   47519 out.go:368] Setting JSON to false
	I1217 01:21:14.732261   47519 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7421,"bootTime":1765927054,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 01:21:14.732311   47519 start.go:143] virtualization: kvm guest
	I1217 01:21:14.734367   47519 out.go:179] * [test-preload-634039] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 01:21:14.736068   47519 notify.go:221] Checking for updates...
	I1217 01:21:14.736131   47519 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 01:21:14.737391   47519 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 01:21:14.738756   47519 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-12839/kubeconfig
	I1217 01:21:14.740046   47519 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12839/.minikube
	I1217 01:21:14.741255   47519 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 01:21:14.742706   47519 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 01:21:14.744443   47519 config.go:182] Loaded profile config "test-preload-634039": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:21:14.744912   47519 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 01:21:14.778584   47519 out.go:179] * Using the kvm2 driver based on existing profile
	I1217 01:21:14.779795   47519 start.go:309] selected driver: kvm2
	I1217 01:21:14.779808   47519 start.go:927] validating driver "kvm2" against &{Name:test-preload-634039 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:test-preload-634039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:21:14.779914   47519 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 01:21:14.780898   47519 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 01:21:14.780931   47519 cni.go:84] Creating CNI manager for ""
	I1217 01:21:14.780993   47519 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 01:21:14.781070   47519 start.go:353] cluster config:
	{Name:test-preload-634039 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:test-preload-634039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:21:14.781188   47519 iso.go:125] acquiring lock: {Name:mk94a221d1243bc618ab687e91468d7a3f9fe960 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:21:14.782815   47519 out.go:179] * Starting "test-preload-634039" primary control-plane node in "test-preload-634039" cluster
	I1217 01:21:14.784232   47519 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 01:21:14.784261   47519 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-12839/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1217 01:21:14.784270   47519 cache.go:65] Caching tarball of preloaded images
	I1217 01:21:14.784333   47519 preload.go:238] Found /home/jenkins/minikube-integration/22168-12839/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 01:21:14.784344   47519 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1217 01:21:14.784433   47519 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/test-preload-634039/config.json ...
	I1217 01:21:14.784634   47519 start.go:360] acquireMachinesLock for test-preload-634039: {Name:mke100036b6b648b2e8844ce094d9d979b4c8eb4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1217 01:21:14.784674   47519 start.go:364] duration metric: took 24.044µs to acquireMachinesLock for "test-preload-634039"
	I1217 01:21:14.784688   47519 start.go:96] Skipping create...Using existing machine configuration
	I1217 01:21:14.784693   47519 fix.go:54] fixHost starting: 
	I1217 01:21:14.786590   47519 fix.go:112] recreateIfNeeded on test-preload-634039: state=Stopped err=<nil>
	W1217 01:21:14.786609   47519 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 01:21:14.788851   47519 out.go:252] * Restarting existing kvm2 VM for "test-preload-634039" ...
	I1217 01:21:14.788886   47519 main.go:143] libmachine: starting domain...
	I1217 01:21:14.788895   47519 main.go:143] libmachine: ensuring networks are active...
	I1217 01:21:14.789792   47519 main.go:143] libmachine: Ensuring network default is active
	I1217 01:21:14.790143   47519 main.go:143] libmachine: Ensuring network mk-test-preload-634039 is active
	I1217 01:21:14.790584   47519 main.go:143] libmachine: getting domain XML...
	I1217 01:21:14.791552   47519 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>test-preload-634039</name>
	  <uuid>efb07da3-6b3c-429f-9a9a-09b36c91d0ff</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22168-12839/.minikube/machines/test-preload-634039/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22168-12839/.minikube/machines/test-preload-634039/test-preload-634039.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:bc:e5:0b'/>
	      <source network='mk-test-preload-634039'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:03:e8:0f'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
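
	The "Restarting existing kvm2 VM" path above fetches the stored domain XML and starts the already-defined libvirt domain rather than defining a new one. A minimal sketch of that restart using the libvirt.org/go/libvirt bindings (the qemu:///system URI and domain name are from the log; this is an illustration, not minikube's driver code):

	// startdomain.go - look up an existing, stopped libvirt domain and start it.
	package main

	import (
		"fmt"
		"log"

		"libvirt.org/go/libvirt"
	)

	func main() {
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatalf("connect: %v", err)
		}
		defer conn.Close()

		dom, err := conn.LookupDomainByName("test-preload-634039")
		if err != nil {
			log.Fatalf("lookup: %v", err)
		}
		defer dom.Free()

		// Equivalent of "getting domain XML..." in the trace above.
		xml, err := dom.GetXMLDesc(0)
		if err != nil {
			log.Fatalf("xml: %v", err)
		}
		fmt.Printf("domain XML is %d bytes\n", len(xml))

		// Create() starts a defined-but-stopped domain ("starting domain...").
		if err := dom.Create(); err != nil {
			log.Fatalf("start: %v", err)
		}
		fmt.Println("domain is now running")
	}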
	
	I1217 01:21:16.054780   47519 main.go:143] libmachine: waiting for domain to start...
	I1217 01:21:16.056177   47519 main.go:143] libmachine: domain is now running
	I1217 01:21:16.056197   47519 main.go:143] libmachine: waiting for IP...
	I1217 01:21:16.056998   47519 main.go:143] libmachine: domain test-preload-634039 has defined MAC address 52:54:00:bc:e5:0b in network mk-test-preload-634039
	I1217 01:21:16.057584   47519 main.go:143] libmachine: domain test-preload-634039 has current primary IP address 192.168.39.94 and MAC address 52:54:00:bc:e5:0b in network mk-test-preload-634039
	I1217 01:21:16.057599   47519 main.go:143] libmachine: found domain IP: 192.168.39.94
	I1217 01:21:16.057604   47519 main.go:143] libmachine: reserving static IP address...
	I1217 01:21:16.057988   47519 main.go:143] libmachine: found host DHCP lease matching {name: "test-preload-634039", mac: "52:54:00:bc:e5:0b", ip: "192.168.39.94"} in network mk-test-preload-634039: {Iface:virbr1 ExpiryTime:2025-12-17 02:19:54 +0000 UTC Type:0 Mac:52:54:00:bc:e5:0b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:test-preload-634039 Clientid:01:52:54:00:bc:e5:0b}
	I1217 01:21:16.058009   47519 main.go:143] libmachine: skip adding static IP to network mk-test-preload-634039 - found existing host DHCP lease matching {name: "test-preload-634039", mac: "52:54:00:bc:e5:0b", ip: "192.168.39.94"}
	I1217 01:21:16.058034   47519 main.go:143] libmachine: reserved static IP address 192.168.39.94 for domain test-preload-634039
	I1217 01:21:16.058044   47519 main.go:143] libmachine: waiting for SSH...
	I1217 01:21:16.058051   47519 main.go:143] libmachine: Getting to WaitForSSH function...
	I1217 01:21:16.060582   47519 main.go:143] libmachine: domain test-preload-634039 has defined MAC address 52:54:00:bc:e5:0b in network mk-test-preload-634039
	I1217 01:21:16.060927   47519 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bc:e5:0b", ip: ""} in network mk-test-preload-634039: {Iface:virbr1 ExpiryTime:2025-12-17 02:19:54 +0000 UTC Type:0 Mac:52:54:00:bc:e5:0b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:test-preload-634039 Clientid:01:52:54:00:bc:e5:0b}
	I1217 01:21:16.060947   47519 main.go:143] libmachine: domain test-preload-634039 has defined IP address 192.168.39.94 and MAC address 52:54:00:bc:e5:0b in network mk-test-preload-634039
	I1217 01:21:16.061148   47519 main.go:143] libmachine: Using SSH client type: native
	I1217 01:21:16.061362   47519 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I1217 01:21:16.061373   47519 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1217 01:21:19.111308   47519 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.94:22: connect: no route to host
	I1217 01:21:25.191382   47519 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.94:22: connect: no route to host
	I1217 01:21:28.308710   47519 main.go:143] libmachine: SSH cmd err, output: <nil>: 
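
	The two "no route to host" dial errors followed by a clean `exit 0` above are the driver's SSH readiness probe while the guest boots. A minimal sketch of such a dial-retry loop (guest address from the log; the retry policy is an assumption, not minikube's):

	// sshwait.go - keep dialing the guest's SSH port until a TCP connection succeeds.
	package main

	import (
		"fmt"
		"log"
		"net"
		"time"
	)

	func main() {
		addr := "192.168.39.94:22"
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
			if err == nil {
				conn.Close()
				fmt.Println("SSH port is reachable:", addr)
				return
			}
			// e.g. "dial tcp 192.168.39.94:22: connect: no route to host"
			log.Printf("dial: %v (retrying)", err)
			time.Sleep(3 * time.Second)
		}
		log.Fatal("timed out waiting for SSH")
	}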
	I1217 01:21:28.312635   47519 main.go:143] libmachine: domain test-preload-634039 has defined MAC address 52:54:00:bc:e5:0b in network mk-test-preload-634039
	I1217 01:21:28.313128   47519 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bc:e5:0b", ip: ""} in network mk-test-preload-634039: {Iface:virbr1 ExpiryTime:2025-12-17 02:21:26 +0000 UTC Type:0 Mac:52:54:00:bc:e5:0b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:test-preload-634039 Clientid:01:52:54:00:bc:e5:0b}
	I1217 01:21:28.313156   47519 main.go:143] libmachine: domain test-preload-634039 has defined IP address 192.168.39.94 and MAC address 52:54:00:bc:e5:0b in network mk-test-preload-634039
	I1217 01:21:28.313427   47519 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/test-preload-634039/config.json ...
	I1217 01:21:28.314315   47519 machine.go:94] provisionDockerMachine start ...
	I1217 01:21:28.316982   47519 main.go:143] libmachine: domain test-preload-634039 has defined MAC address 52:54:00:bc:e5:0b in network mk-test-preload-634039
	I1217 01:21:28.317468   47519 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bc:e5:0b", ip: ""} in network mk-test-preload-634039: {Iface:virbr1 ExpiryTime:2025-12-17 02:21:26 +0000 UTC Type:0 Mac:52:54:00:bc:e5:0b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:test-preload-634039 Clientid:01:52:54:00:bc:e5:0b}
	I1217 01:21:28.317493   47519 main.go:143] libmachine: domain test-preload-634039 has defined IP address 192.168.39.94 and MAC address 52:54:00:bc:e5:0b in network mk-test-preload-634039
	I1217 01:21:28.317653   47519 main.go:143] libmachine: Using SSH client type: native
	I1217 01:21:28.317846   47519 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I1217 01:21:28.317857   47519 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 01:21:28.434226   47519 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1217 01:21:28.434257   47519 buildroot.go:166] provisioning hostname "test-preload-634039"
	I1217 01:21:28.437496   47519 main.go:143] libmachine: domain test-preload-634039 has defined MAC address 52:54:00:bc:e5:0b in network mk-test-preload-634039
	I1217 01:21:28.438037   47519 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bc:e5:0b", ip: ""} in network mk-test-preload-634039: {Iface:virbr1 ExpiryTime:2025-12-17 02:21:26 +0000 UTC Type:0 Mac:52:54:00:bc:e5:0b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:test-preload-634039 Clientid:01:52:54:00:bc:e5:0b}
	I1217 01:21:28.438076   47519 main.go:143] libmachine: domain test-preload-634039 has defined IP address 192.168.39.94 and MAC address 52:54:00:bc:e5:0b in network mk-test-preload-634039
	I1217 01:21:28.438329   47519 main.go:143] libmachine: Using SSH client type: native
	I1217 01:21:28.438564   47519 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I1217 01:21:28.438579   47519 main.go:143] libmachine: About to run SSH command:
	sudo hostname test-preload-634039 && echo "test-preload-634039" | sudo tee /etc/hostname
	I1217 01:21:28.590813   47519 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-634039
	
	I1217 01:21:28.593738   47519 main.go:143] libmachine: domain test-preload-634039 has defined MAC address 52:54:00:bc:e5:0b in network mk-test-preload-634039
	I1217 01:21:28.594248   47519 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bc:e5:0b", ip: ""} in network mk-test-preload-634039: {Iface:virbr1 ExpiryTime:2025-12-17 02:21:26 +0000 UTC Type:0 Mac:52:54:00:bc:e5:0b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:test-preload-634039 Clientid:01:52:54:00:bc:e5:0b}
	I1217 01:21:28.594275   47519 main.go:143] libmachine: domain test-preload-634039 has defined IP address 192.168.39.94 and MAC address 52:54:00:bc:e5:0b in network mk-test-preload-634039
	I1217 01:21:28.594448   47519 main.go:143] libmachine: Using SSH client type: native
	I1217 01:21:28.594666   47519 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I1217 01:21:28.594683   47519 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-634039' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-634039/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-634039' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 01:21:28.722772   47519 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 01:21:28.722804   47519 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22168-12839/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-12839/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-12839/.minikube}
	I1217 01:21:28.722844   47519 buildroot.go:174] setting up certificates
	I1217 01:21:28.722857   47519 provision.go:84] configureAuth start
	I1217 01:21:28.725996   47519 main.go:143] libmachine: domain test-preload-634039 has defined MAC address 52:54:00:bc:e5:0b in network mk-test-preload-634039
	I1217 01:21:28.726414   47519 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bc:e5:0b", ip: ""} in network mk-test-preload-634039: {Iface:virbr1 ExpiryTime:2025-12-17 02:21:26 +0000 UTC Type:0 Mac:52:54:00:bc:e5:0b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:test-preload-634039 Clientid:01:52:54:00:bc:e5:0b}
	I1217 01:21:28.726444   47519 main.go:143] libmachine: domain test-preload-634039 has defined IP address 192.168.39.94 and MAC address 52:54:00:bc:e5:0b in network mk-test-preload-634039
	I1217 01:21:28.728815   47519 main.go:143] libmachine: domain test-preload-634039 has defined MAC address 52:54:00:bc:e5:0b in network mk-test-preload-634039
	I1217 01:21:28.729258   47519 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bc:e5:0b", ip: ""} in network mk-test-preload-634039: {Iface:virbr1 ExpiryTime:2025-12-17 02:21:26 +0000 UTC Type:0 Mac:52:54:00:bc:e5:0b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:test-preload-634039 Clientid:01:52:54:00:bc:e5:0b}
	I1217 01:21:28.729279   47519 main.go:143] libmachine: domain test-preload-634039 has defined IP address 192.168.39.94 and MAC address 52:54:00:bc:e5:0b in network mk-test-preload-634039
	I1217 01:21:28.729392   47519 provision.go:143] copyHostCerts
	I1217 01:21:28.729444   47519 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12839/.minikube/ca.pem, removing ...
	I1217 01:21:28.729457   47519 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12839/.minikube/ca.pem
	I1217 01:21:28.729529   47519 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-12839/.minikube/ca.pem (1078 bytes)
	I1217 01:21:28.729614   47519 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12839/.minikube/cert.pem, removing ...
	I1217 01:21:28.729622   47519 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12839/.minikube/cert.pem
	I1217 01:21:28.729669   47519 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-12839/.minikube/cert.pem (1123 bytes)
	I1217 01:21:28.729756   47519 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12839/.minikube/key.pem, removing ...
	I1217 01:21:28.729766   47519 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12839/.minikube/key.pem
	I1217 01:21:28.729797   47519 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-12839/.minikube/key.pem (1679 bytes)
	I1217 01:21:28.729849   47519 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-12839/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca-key.pem org=jenkins.test-preload-634039 san=[127.0.0.1 192.168.39.94 localhost minikube test-preload-634039]
	I1217 01:21:28.965323   47519 provision.go:177] copyRemoteCerts
	I1217 01:21:28.965384   47519 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 01:21:28.968647   47519 main.go:143] libmachine: domain test-preload-634039 has defined MAC address 52:54:00:bc:e5:0b in network mk-test-preload-634039
	I1217 01:21:28.968946   47519 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bc:e5:0b", ip: ""} in network mk-test-preload-634039: {Iface:virbr1 ExpiryTime:2025-12-17 02:21:26 +0000 UTC Type:0 Mac:52:54:00:bc:e5:0b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:test-preload-634039 Clientid:01:52:54:00:bc:e5:0b}
	I1217 01:21:28.968964   47519 main.go:143] libmachine: domain test-preload-634039 has defined IP address 192.168.39.94 and MAC address 52:54:00:bc:e5:0b in network mk-test-preload-634039
	I1217 01:21:28.969131   47519 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/test-preload-634039/id_rsa Username:docker}
	I1217 01:21:29.056581   47519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 01:21:29.087971   47519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 01:21:29.119190   47519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1217 01:21:29.149928   47519 provision.go:87] duration metric: took 427.039476ms to configureAuth
	I1217 01:21:29.149962   47519 buildroot.go:189] setting minikube options for container-runtime
	I1217 01:21:29.150179   47519 config.go:182] Loaded profile config "test-preload-634039": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:21:29.153123   47519 main.go:143] libmachine: domain test-preload-634039 has defined MAC address 52:54:00:bc:e5:0b in network mk-test-preload-634039
	I1217 01:21:29.153478   47519 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bc:e5:0b", ip: ""} in network mk-test-preload-634039: {Iface:virbr1 ExpiryTime:2025-12-17 02:21:26 +0000 UTC Type:0 Mac:52:54:00:bc:e5:0b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:test-preload-634039 Clientid:01:52:54:00:bc:e5:0b}
	I1217 01:21:29.153504   47519 main.go:143] libmachine: domain test-preload-634039 has defined IP address 192.168.39.94 and MAC address 52:54:00:bc:e5:0b in network mk-test-preload-634039
	I1217 01:21:29.153675   47519 main.go:143] libmachine: Using SSH client type: native
	I1217 01:21:29.153901   47519 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I1217 01:21:29.153916   47519 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 01:21:29.400280   47519 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 01:21:29.400313   47519 machine.go:97] duration metric: took 1.085983016s to provisionDockerMachine
	I1217 01:21:29.400328   47519 start.go:293] postStartSetup for "test-preload-634039" (driver="kvm2")
	I1217 01:21:29.400343   47519 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 01:21:29.400406   47519 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 01:21:29.403570   47519 main.go:143] libmachine: domain test-preload-634039 has defined MAC address 52:54:00:bc:e5:0b in network mk-test-preload-634039
	I1217 01:21:29.403950   47519 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bc:e5:0b", ip: ""} in network mk-test-preload-634039: {Iface:virbr1 ExpiryTime:2025-12-17 02:21:26 +0000 UTC Type:0 Mac:52:54:00:bc:e5:0b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:test-preload-634039 Clientid:01:52:54:00:bc:e5:0b}
	I1217 01:21:29.403980   47519 main.go:143] libmachine: domain test-preload-634039 has defined IP address 192.168.39.94 and MAC address 52:54:00:bc:e5:0b in network mk-test-preload-634039
	I1217 01:21:29.404132   47519 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/test-preload-634039/id_rsa Username:docker}
	I1217 01:21:29.491815   47519 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 01:21:29.496657   47519 info.go:137] Remote host: Buildroot 2025.02
	I1217 01:21:29.496690   47519 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12839/.minikube/addons for local assets ...
	I1217 01:21:29.496767   47519 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12839/.minikube/files for local assets ...
	I1217 01:21:29.496875   47519 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-12839/.minikube/files/etc/ssl/certs/170742.pem -> 170742.pem in /etc/ssl/certs
	I1217 01:21:29.496986   47519 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 01:21:29.513986   47519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/files/etc/ssl/certs/170742.pem --> /etc/ssl/certs/170742.pem (1708 bytes)
	I1217 01:21:29.558209   47519 start.go:296] duration metric: took 157.867543ms for postStartSetup
	I1217 01:21:29.558252   47519 fix.go:56] duration metric: took 14.773558255s for fixHost
	I1217 01:21:29.561263   47519 main.go:143] libmachine: domain test-preload-634039 has defined MAC address 52:54:00:bc:e5:0b in network mk-test-preload-634039
	I1217 01:21:29.561725   47519 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bc:e5:0b", ip: ""} in network mk-test-preload-634039: {Iface:virbr1 ExpiryTime:2025-12-17 02:21:26 +0000 UTC Type:0 Mac:52:54:00:bc:e5:0b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:test-preload-634039 Clientid:01:52:54:00:bc:e5:0b}
	I1217 01:21:29.561758   47519 main.go:143] libmachine: domain test-preload-634039 has defined IP address 192.168.39.94 and MAC address 52:54:00:bc:e5:0b in network mk-test-preload-634039
	I1217 01:21:29.561969   47519 main.go:143] libmachine: Using SSH client type: native
	I1217 01:21:29.562239   47519 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I1217 01:21:29.562255   47519 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1217 01:21:29.673856   47519 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765934489.632762100
	
	I1217 01:21:29.673877   47519 fix.go:216] guest clock: 1765934489.632762100
	I1217 01:21:29.673884   47519 fix.go:229] Guest: 2025-12-17 01:21:29.6327621 +0000 UTC Remote: 2025-12-17 01:21:29.558257849 +0000 UTC m=+14.875249347 (delta=74.504251ms)
	I1217 01:21:29.673900   47519 fix.go:200] guest clock delta is within tolerance: 74.504251ms
	I1217 01:21:29.673909   47519 start.go:83] releasing machines lock for "test-preload-634039", held for 14.889223593s
	I1217 01:21:29.676887   47519 main.go:143] libmachine: domain test-preload-634039 has defined MAC address 52:54:00:bc:e5:0b in network mk-test-preload-634039
	I1217 01:21:29.677329   47519 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bc:e5:0b", ip: ""} in network mk-test-preload-634039: {Iface:virbr1 ExpiryTime:2025-12-17 02:21:26 +0000 UTC Type:0 Mac:52:54:00:bc:e5:0b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:test-preload-634039 Clientid:01:52:54:00:bc:e5:0b}
	I1217 01:21:29.677364   47519 main.go:143] libmachine: domain test-preload-634039 has defined IP address 192.168.39.94 and MAC address 52:54:00:bc:e5:0b in network mk-test-preload-634039
	I1217 01:21:29.677894   47519 ssh_runner.go:195] Run: cat /version.json
	I1217 01:21:29.677967   47519 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 01:21:29.680688   47519 main.go:143] libmachine: domain test-preload-634039 has defined MAC address 52:54:00:bc:e5:0b in network mk-test-preload-634039
	I1217 01:21:29.680841   47519 main.go:143] libmachine: domain test-preload-634039 has defined MAC address 52:54:00:bc:e5:0b in network mk-test-preload-634039
	I1217 01:21:29.681097   47519 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bc:e5:0b", ip: ""} in network mk-test-preload-634039: {Iface:virbr1 ExpiryTime:2025-12-17 02:21:26 +0000 UTC Type:0 Mac:52:54:00:bc:e5:0b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:test-preload-634039 Clientid:01:52:54:00:bc:e5:0b}
	I1217 01:21:29.681126   47519 main.go:143] libmachine: domain test-preload-634039 has defined IP address 192.168.39.94 and MAC address 52:54:00:bc:e5:0b in network mk-test-preload-634039
	I1217 01:21:29.681329   47519 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/test-preload-634039/id_rsa Username:docker}
	I1217 01:21:29.681361   47519 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bc:e5:0b", ip: ""} in network mk-test-preload-634039: {Iface:virbr1 ExpiryTime:2025-12-17 02:21:26 +0000 UTC Type:0 Mac:52:54:00:bc:e5:0b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:test-preload-634039 Clientid:01:52:54:00:bc:e5:0b}
	I1217 01:21:29.681392   47519 main.go:143] libmachine: domain test-preload-634039 has defined IP address 192.168.39.94 and MAC address 52:54:00:bc:e5:0b in network mk-test-preload-634039
	I1217 01:21:29.681544   47519 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/test-preload-634039/id_rsa Username:docker}
	I1217 01:21:29.785940   47519 ssh_runner.go:195] Run: systemctl --version
	I1217 01:21:29.792653   47519 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 01:21:29.938293   47519 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 01:21:29.945211   47519 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 01:21:29.945269   47519 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 01:21:29.964901   47519 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 01:21:29.964929   47519 start.go:496] detecting cgroup driver to use...
	I1217 01:21:29.965006   47519 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 01:21:29.984417   47519 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 01:21:30.001266   47519 docker.go:218] disabling cri-docker service (if available) ...
	I1217 01:21:30.001331   47519 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 01:21:30.018238   47519 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 01:21:30.034681   47519 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 01:21:30.179903   47519 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 01:21:30.397686   47519 docker.go:234] disabling docker service ...
	I1217 01:21:30.397751   47519 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 01:21:30.414662   47519 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 01:21:30.429559   47519 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 01:21:30.589254   47519 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 01:21:30.732715   47519 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 01:21:30.749134   47519 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 01:21:30.773112   47519 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 01:21:30.773189   47519 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:21:30.786092   47519 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 01:21:30.786159   47519 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:21:30.799007   47519 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:21:30.811773   47519 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:21:30.824858   47519 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 01:21:30.839231   47519 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:21:30.852226   47519 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:21:30.873116   47519 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
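The `sed` edits above rewrite `/etc/crio/crio.conf.d/02-crio.conf` in place: pin the pause image, switch the cgroup manager to cgroupfs, and re-add `conmon_cgroup = "pod"`. A hedged Go sketch of those first three substitutions, applied to an in-memory sample config (sample contents are illustrative):

```go
// Sketch: the sed-style line rewrites from the log, applied to a sample
// 02-crio.conf held in memory rather than to the file on the guest.
package main

import (
	"fmt"
	"regexp"
	"strings"
)

func main() {
	conf := `pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	// s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// /conmon_cgroup = .*/d, then /cgroup_manager = .*/a conmon_cgroup = "pod"
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = strings.Replace(conf,
		`cgroup_manager = "cgroupfs"`,
		"cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"", 1)
	fmt.Print(conf)
}
```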
	I1217 01:21:30.885822   47519 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 01:21:30.896719   47519 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1217 01:21:30.896774   47519 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1217 01:21:30.916797   47519 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
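Loading `br_netfilter` and writing `1` to `ip_forward` above are the two kernel prerequisites for pod networking: bridged traffic must traverse iptables, and the node must forward IPv4. A small Linux-only Go sketch that verifies both settings rather than changing them:

```go
// Sketch: verify the two kernel settings configured above. The first file
// only exists once the br_netfilter module is loaded (hence the modprobe).
package main

import (
	"fmt"
	"os"
	"strings"
)

func check(path string) {
	b, err := os.ReadFile(path)
	if err != nil {
		fmt.Printf("%s: %v\n", path, err)
		return
	}
	fmt.Printf("%s = %s\n", path, strings.TrimSpace(string(b)))
}

func main() {
	check("/proc/sys/net/bridge/bridge-nf-call-iptables")
	check("/proc/sys/net/ipv4/ip_forward")
}
```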
	I1217 01:21:30.928938   47519 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:21:31.067330   47519 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 01:21:31.176489   47519 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 01:21:31.176579   47519 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 01:21:31.182491   47519 start.go:564] Will wait 60s for crictl version
	I1217 01:21:31.182545   47519 ssh_runner.go:195] Run: which crictl
	I1217 01:21:31.186673   47519 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1217 01:21:31.223496   47519 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1217 01:21:31.223583   47519 ssh_runner.go:195] Run: crio --version
	I1217 01:21:31.252186   47519 ssh_runner.go:195] Run: crio --version
	I1217 01:21:31.283567   47519 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1217 01:21:31.287542   47519 main.go:143] libmachine: domain test-preload-634039 has defined MAC address 52:54:00:bc:e5:0b in network mk-test-preload-634039
	I1217 01:21:31.287901   47519 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bc:e5:0b", ip: ""} in network mk-test-preload-634039: {Iface:virbr1 ExpiryTime:2025-12-17 02:21:26 +0000 UTC Type:0 Mac:52:54:00:bc:e5:0b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:test-preload-634039 Clientid:01:52:54:00:bc:e5:0b}
	I1217 01:21:31.287923   47519 main.go:143] libmachine: domain test-preload-634039 has defined IP address 192.168.39.94 and MAC address 52:54:00:bc:e5:0b in network mk-test-preload-634039
	I1217 01:21:31.288087   47519 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1217 01:21:31.292748   47519 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 01:21:31.307560   47519 kubeadm.go:884] updating cluster {Name:test-preload-634039 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:test-preload-634039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 01:21:31.307694   47519 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 01:21:31.307760   47519 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 01:21:31.342174   47519 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1217 01:21:31.342265   47519 ssh_runner.go:195] Run: which lz4
	I1217 01:21:31.346710   47519 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1217 01:21:31.351774   47519 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1217 01:21:31.351802   47519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
	I1217 01:21:32.703347   47519 crio.go:462] duration metric: took 1.356667212s to copy over tarball
	I1217 01:21:32.703419   47519 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1217 01:21:34.253341   47519 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.549893368s)
	I1217 01:21:34.253382   47519 crio.go:469] duration metric: took 1.550002887s to extract the tarball
	I1217 01:21:34.253391   47519 ssh_runner.go:146] rm: /preloaded.tar.lz4
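The preload path above copies a ~340 MB lz4 tarball over SSH and unpacks it into `/var`, preserving the `security.capability` xattr so binaries keep their file capabilities. A sketch of the same extraction driven from Go, shelling out to the system `tar` (assumes `tar` and `lz4` are installed; the tarball path is illustrative and only exists on the minikube guest):

```go
// Sketch: the preload extraction step, shelling out to the system tar with
// lz4 decompression and xattr preservation.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "extract failed:", err)
	}
}
```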
	I1217 01:21:34.290248   47519 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 01:21:34.329888   47519 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 01:21:34.329916   47519 cache_images.go:86] Images are preloaded, skipping loading
	I1217 01:21:34.329923   47519 kubeadm.go:935] updating node { 192.168.39.94 8443 v1.34.2 crio true true} ...
	I1217 01:21:34.330014   47519 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-634039 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.94
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:test-preload-634039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
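The kubelet unit drop-in above uses the standard systemd override pattern: the empty `ExecStart=` line clears the packaged unit's command before the second `ExecStart=` sets minikube's own kubelet invocation. A sketch that writes such a drop-in (target path and flag set abbreviated, not minikube's exact output):

```go
// Sketch: write a kubelet drop-in like the one logged above. The empty
// "ExecStart=" clears the packaged unit's command before the override.
package main

import (
	"fmt"
	"os"
)

func main() {
	unit := `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --config=/var/lib/kubelet/config.yaml --node-ip=192.168.39.94

[Install]
`
	// Real target: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	if err := os.WriteFile("10-kubeadm.conf", []byte(unit), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println(`wrote 10-kubeadm.conf; apply with "systemctl daemon-reload"`)
}
```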
	I1217 01:21:34.330111   47519 ssh_runner.go:195] Run: crio config
	I1217 01:21:34.379839   47519 cni.go:84] Creating CNI manager for ""
	I1217 01:21:34.379861   47519 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 01:21:34.379875   47519 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 01:21:34.379894   47519 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.94 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-634039 NodeName:test-preload-634039 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.94"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.94 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 01:21:34.379993   47519 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.94
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-634039"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.94"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.94"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 01:21:34.380074   47519 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1217 01:21:34.392770   47519 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 01:21:34.392840   47519 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 01:21:34.404085   47519 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1217 01:21:34.424445   47519 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 01:21:34.445198   47519 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1217 01:21:34.466591   47519 ssh_runner.go:195] Run: grep 192.168.39.94	control-plane.minikube.internal$ /etc/hosts
	I1217 01:21:34.471074   47519 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.94	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
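The bash one-liner above rewrites `/etc/hosts` by filtering out any stale `control-plane.minikube.internal` entry and appending a fresh one, staging the result in `/tmp` before copying it back with sudo. The same filter-and-append logic as a Go sketch (printing the result instead of writing it):

```go
// Sketch: the /etc/hosts rewrite performed by the logged one-liner.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const host = "control-plane.minikube.internal"
	in, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	var out []string
	for _, line := range strings.Split(strings.TrimRight(string(in), "\n"), "\n") {
		// Mirrors grep -v $'\t...$': drop any stale entry for the name.
		if !strings.HasSuffix(line, "\t"+host) {
			out = append(out, line)
		}
	}
	out = append(out, "192.168.39.94\t"+host)
	fmt.Println(strings.Join(out, "\n"))
}
```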
	I1217 01:21:34.487123   47519 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:21:34.626521   47519 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 01:21:34.646732   47519 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/test-preload-634039 for IP: 192.168.39.94
	I1217 01:21:34.646758   47519 certs.go:195] generating shared ca certs ...
	I1217 01:21:34.646773   47519 certs.go:227] acquiring lock for ca certs: {Name:mk381e1d576792ac916a6048c2225a8ab856de70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:21:34.646916   47519 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-12839/.minikube/ca.key
	I1217 01:21:34.646954   47519 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-12839/.minikube/proxy-client-ca.key
	I1217 01:21:34.646963   47519 certs.go:257] generating profile certs ...
	I1217 01:21:34.647080   47519 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/test-preload-634039/client.key
	I1217 01:21:34.647130   47519 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/test-preload-634039/apiserver.key.7cf3f648
	I1217 01:21:34.647167   47519 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/test-preload-634039/proxy-client.key
	I1217 01:21:34.647292   47519 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/17074.pem (1338 bytes)
	W1217 01:21:34.647324   47519 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-12839/.minikube/certs/17074_empty.pem, impossibly tiny 0 bytes
	I1217 01:21:34.647333   47519 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca-key.pem (1679 bytes)
	I1217 01:21:34.647361   47519 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca.pem (1078 bytes)
	I1217 01:21:34.647385   47519 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/cert.pem (1123 bytes)
	I1217 01:21:34.647408   47519 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/key.pem (1679 bytes)
	I1217 01:21:34.647452   47519 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/files/etc/ssl/certs/170742.pem (1708 bytes)
	I1217 01:21:34.648009   47519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 01:21:34.691084   47519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 01:21:34.723279   47519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 01:21:34.758268   47519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 01:21:34.788717   47519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/test-preload-634039/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1217 01:21:34.822885   47519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/test-preload-634039/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 01:21:34.852260   47519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/test-preload-634039/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 01:21:34.880953   47519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/test-preload-634039/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 01:21:34.911313   47519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 01:21:34.940066   47519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/certs/17074.pem --> /usr/share/ca-certificates/17074.pem (1338 bytes)
	I1217 01:21:34.968105   47519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/files/etc/ssl/certs/170742.pem --> /usr/share/ca-certificates/170742.pem (1708 bytes)
	I1217 01:21:34.996695   47519 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 01:21:35.017096   47519 ssh_runner.go:195] Run: openssl version
	I1217 01:21:35.023904   47519 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:21:35.035746   47519 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 01:21:35.047156   47519 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:21:35.052463   47519 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:21:35.052517   47519 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:21:35.060043   47519 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 01:21:35.071404   47519 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 01:21:35.082433   47519 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/17074.pem
	I1217 01:21:35.093691   47519 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/17074.pem /etc/ssl/certs/17074.pem
	I1217 01:21:35.105361   47519 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17074.pem
	I1217 01:21:35.110813   47519 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:18 /usr/share/ca-certificates/17074.pem
	I1217 01:21:35.110887   47519 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17074.pem
	I1217 01:21:35.118090   47519 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 01:21:35.130402   47519 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/17074.pem /etc/ssl/certs/51391683.0
	I1217 01:21:35.142119   47519 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/170742.pem
	I1217 01:21:35.153416   47519 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/170742.pem /etc/ssl/certs/170742.pem
	I1217 01:21:35.165539   47519 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/170742.pem
	I1217 01:21:35.171070   47519 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:18 /usr/share/ca-certificates/170742.pem
	I1217 01:21:35.171113   47519 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/170742.pem
	I1217 01:21:35.178348   47519 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 01:21:35.190452   47519 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/170742.pem /etc/ssl/certs/3ec20f2e.0
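Each `openssl x509 -hash` / `ln -fs` pair above installs a CA into the OpenSSL trust directory: the subject hash names a `<hash>.0` symlink under `/etc/ssl/certs` that TLS libraries use for lookup. A sketch of one such pair driven from Go (certificate path illustrative; needs root and `openssl` on PATH):

```go
// Sketch: one hash-symlink pair from the sequence above — compute the
// OpenSSL subject hash of a CA and link <hash>.0 to it in the trust dir.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941", as in the log
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // ln -fs semantics: replace an existing link
	if err := os.Symlink(cert, link); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println(link, "->", cert)
}
```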
	I1217 01:21:35.202792   47519 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 01:21:35.208303   47519 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 01:21:35.215914   47519 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 01:21:35.223364   47519 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 01:21:35.230716   47519 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 01:21:35.237999   47519 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 01:21:35.245075   47519 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
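The `-checkend 86400` runs above ask whether each control-plane certificate expires within the next 24 hours; a non-zero exit would trigger regeneration. The same check in pure Go with `crypto/x509`:

```go
// Sketch: the Go equivalent of `openssl x509 -checkend 86400` — exit
// non-zero if the certificate expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: checkend <cert.pem>")
		os.Exit(2)
	}
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("Certificate will expire")
		os.Exit(1)
	}
	fmt.Println("Certificate will not expire")
}
```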
	I1217 01:21:35.252349   47519 kubeadm.go:401] StartCluster: {Name:test-preload-634039 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:test-preload-634039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:21:35.252441   47519 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 01:21:35.252501   47519 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 01:21:35.287662   47519 cri.go:89] found id: ""
	I1217 01:21:35.287751   47519 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 01:21:35.300341   47519 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 01:21:35.300359   47519 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 01:21:35.300405   47519 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 01:21:35.312575   47519 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 01:21:35.312964   47519 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-634039" does not appear in /home/jenkins/minikube-integration/22168-12839/kubeconfig
	I1217 01:21:35.313077   47519 kubeconfig.go:62] /home/jenkins/minikube-integration/22168-12839/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-634039" cluster setting kubeconfig missing "test-preload-634039" context setting]
	I1217 01:21:35.313335   47519 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12839/kubeconfig: {Name:mk0867cff530c231805e36a9674d4fe6612173a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:21:35.313896   47519 kapi.go:59] client config for test-preload-634039: &rest.Config{Host:"https://192.168.39.94:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22168-12839/.minikube/profiles/test-preload-634039/client.crt", KeyFile:"/home/jenkins/minikube-integration/22168-12839/.minikube/profiles/test-preload-634039/client.key", CAFile:"/home/jenkins/minikube-integration/22168-12839/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 01:21:35.314263   47519 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 01:21:35.314278   47519 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 01:21:35.314284   47519 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 01:21:35.314288   47519 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 01:21:35.314292   47519 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 01:21:35.314750   47519 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 01:21:35.326261   47519 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.94
	I1217 01:21:35.326301   47519 kubeadm.go:1161] stopping kube-system containers ...
	I1217 01:21:35.326313   47519 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1217 01:21:35.326355   47519 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 01:21:35.366324   47519 cri.go:89] found id: ""
	I1217 01:21:35.366414   47519 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1217 01:21:35.391757   47519 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 01:21:35.403833   47519 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 01:21:35.403853   47519 kubeadm.go:158] found existing configuration files:
	
	I1217 01:21:35.403896   47519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 01:21:35.414974   47519 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 01:21:35.415059   47519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 01:21:35.427247   47519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 01:21:35.438827   47519 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 01:21:35.438912   47519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 01:21:35.450603   47519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 01:21:35.462222   47519 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 01:21:35.462295   47519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 01:21:35.474234   47519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 01:21:35.485065   47519 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 01:21:35.485152   47519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 01:21:35.496907   47519 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 01:21:35.509360   47519 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 01:21:35.564984   47519 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 01:21:36.180420   47519 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1217 01:21:36.429736   47519 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 01:21:36.506268   47519 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
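Because existing configuration files were found, the restart path runs individual `kubeadm init phase` subcommands (certs, kubeconfig, kubelet-start, control-plane, etcd) instead of a full `kubeadm init`. A sketch of that sequence driven via `os/exec` (assumes `kubeadm` on PATH and the logged config path):

```go
// Sketch: the selective kubeadm phase sequence the restart path runs above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", p, err)
			return
		}
	}
}
```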
	I1217 01:21:36.596620   47519 api_server.go:52] waiting for apiserver process to appear ...
	I1217 01:21:36.596712   47519 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:21:37.097137   47519 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:21:37.597475   47519 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:21:38.097638   47519 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:21:38.597664   47519 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:21:38.646852   47519 api_server.go:72] duration metric: took 2.050245999s to wait for apiserver process to appear ...
	I1217 01:21:38.646879   47519 api_server.go:88] waiting for apiserver healthz status ...
	I1217 01:21:38.646896   47519 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8443/healthz ...
	I1217 01:21:38.647421   47519 api_server.go:269] stopped: https://192.168.39.94:8443/healthz: Get "https://192.168.39.94:8443/healthz": dial tcp 192.168.39.94:8443: connect: connection refused
	I1217 01:21:39.147124   47519 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8443/healthz ...
	I1217 01:21:41.467978   47519 api_server.go:279] https://192.168.39.94:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1217 01:21:41.468008   47519 api_server.go:103] status: https://192.168.39.94:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1217 01:21:41.468049   47519 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8443/healthz ...
	I1217 01:21:41.632484   47519 api_server.go:279] https://192.168.39.94:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 01:21:41.632521   47519 api_server.go:103] status: https://192.168.39.94:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 01:21:41.647888   47519 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8443/healthz ...
	I1217 01:21:41.654893   47519 api_server.go:279] https://192.168.39.94:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 01:21:41.654923   47519 api_server.go:103] status: https://192.168.39.94:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 01:21:42.147644   47519 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8443/healthz ...
	I1217 01:21:42.152514   47519 api_server.go:279] https://192.168.39.94:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 01:21:42.152538   47519 api_server.go:103] status: https://192.168.39.94:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 01:21:42.647219   47519 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8443/healthz ...
	I1217 01:21:42.671760   47519 api_server.go:279] https://192.168.39.94:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 01:21:42.671790   47519 api_server.go:103] status: https://192.168.39.94:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 01:21:43.147358   47519 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8443/healthz ...
	I1217 01:21:43.157794   47519 api_server.go:279] https://192.168.39.94:8443/healthz returned 200:
	ok
	I1217 01:21:43.166812   47519 api_server.go:141] control plane version: v1.34.2
	I1217 01:21:43.166842   47519 api_server.go:131] duration metric: took 4.519956613s to wait for apiserver health ...
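The retry loop above polls `/healthz` until it returns 200, treating the early 403 (anonymous user, RBAC not bootstrapped yet) and 500 (post-start hooks still failing) responses as "keep waiting". A minimal sketch of such a poller; the `InsecureSkipVerify` is a shortcut for this sketch only, since the real client is built from the cluster CA shown in the kapi.go config above:

```go
// Sketch: poll the apiserver /healthz endpoint until it reports healthy,
// retrying through the 403/500 responses seen during startup.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.39.94:8443/healthz" // address from the log
	deadline := time.Now().Add(60 * time.Second)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body)) // "ok"
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}
```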
	I1217 01:21:43.166854   47519 cni.go:84] Creating CNI manager for ""
	I1217 01:21:43.166863   47519 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 01:21:43.168694   47519 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1217 01:21:43.169995   47519 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1217 01:21:43.185070   47519 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
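The 496-byte conflist scp'd above is the bridge CNI config the earlier "recommending bridge" decision selected. The exact file is not captured in the log; the sketch below generates a hypothetical conflist of the same shape, with every field a plausible reconstruction and the pod CIDR taken from the "Using pod CIDR" line above:

```go
// Sketch: emit a bridge CNI conflist of the shape written to
// /etc/cni/net.d/1-k8s.conflist. All values are illustrative.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	conflist := map[string]any{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":        "bridge",
				"bridge":      "bridge",
				"isGateway":   true,
				"ipMasq":      true,
				"hairpinMode": true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16", // pod CIDR from the log
				},
			},
			{"type": "portmap", "capabilities": map[string]bool{"portBindings": true}},
		},
	}
	b, _ := json.MarshalIndent(conflist, "", "  ")
	fmt.Println(string(b))
}
```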
	I1217 01:21:43.210153   47519 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 01:21:43.217565   47519 system_pods.go:59] 7 kube-system pods found
	I1217 01:21:43.217630   47519 system_pods.go:61] "coredns-66bc5c9577-r6dkv" [42f1fbcd-08b1-4351-83bc-5c92e7666bad] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 01:21:43.217644   47519 system_pods.go:61] "etcd-test-preload-634039" [fd34f376-81c9-4f8b-8625-aea320116b91] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 01:21:43.217661   47519 system_pods.go:61] "kube-apiserver-test-preload-634039" [3eace9ae-b8bb-44fe-aa36-3f41eb24775a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 01:21:43.217677   47519 system_pods.go:61] "kube-controller-manager-test-preload-634039" [4cd1f0b7-eec0-4e53-81f5-5a1e94248d28] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 01:21:43.217687   47519 system_pods.go:61] "kube-proxy-txwjp" [7ff4545d-718f-4d12-a385-54f546fd7283] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 01:21:43.217700   47519 system_pods.go:61] "kube-scheduler-test-preload-634039" [fab60024-fc08-4fa9-86eb-ed9e64dcd78e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 01:21:43.217711   47519 system_pods.go:61] "storage-provisioner" [d275fc12-7534-4bdf-a56a-227b4d4c0eff] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 01:21:43.217725   47519 system_pods.go:74] duration metric: took 7.545527ms to wait for pod list to return data ...
	I1217 01:21:43.217738   47519 node_conditions.go:102] verifying NodePressure condition ...
	I1217 01:21:43.226234   47519 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1217 01:21:43.226273   47519 node_conditions.go:123] node cpu capacity is 2
	I1217 01:21:43.226290   47519 node_conditions.go:105] duration metric: took 8.546217ms to run NodePressure ...
	I1217 01:21:43.226355   47519 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 01:21:43.501342   47519 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1217 01:21:43.505647   47519 kubeadm.go:744] kubelet initialised
	I1217 01:21:43.505676   47519 kubeadm.go:745] duration metric: took 4.303795ms waiting for restarted kubelet to initialise ...
	I1217 01:21:43.505693   47519 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 01:21:43.520652   47519 ops.go:34] apiserver oom_adj: -16
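Reading `/proc/$(pgrep kube-apiserver)/oom_adj` above confirms the apiserver runs with a strongly negative OOM adjustment (-16), so the kernel's OOM killer avoids it under memory pressure. The logged one-liner, reproduced as a Linux-only Go sketch:

```go
// Sketch: find the apiserver PID with pgrep and read its oom_adj from procfs.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "kube-apiserver not running:", err)
		return
	}
	pid := strings.Fields(string(out))[0]
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Printf("apiserver oom_adj: %s", adj)
}
```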
	I1217 01:21:43.520674   47519 kubeadm.go:602] duration metric: took 8.220309043s to restartPrimaryControlPlane
	I1217 01:21:43.520686   47519 kubeadm.go:403] duration metric: took 8.268344297s to StartCluster
	I1217 01:21:43.520705   47519 settings.go:142] acquiring lock: {Name:mk0fa06a6a557f0851b041158306daec92094c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:21:43.520793   47519 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-12839/kubeconfig
	I1217 01:21:43.521346   47519 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12839/kubeconfig: {Name:mk0867cff530c231805e36a9674d4fe6612173a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:21:43.521639   47519 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.94 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 01:21:43.521706   47519 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 01:21:43.521806   47519 addons.go:70] Setting storage-provisioner=true in profile "test-preload-634039"
	I1217 01:21:43.521825   47519 addons.go:239] Setting addon storage-provisioner=true in "test-preload-634039"
	W1217 01:21:43.521837   47519 addons.go:248] addon storage-provisioner should already be in state true
	I1217 01:21:43.521848   47519 config.go:182] Loaded profile config "test-preload-634039": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:21:43.521865   47519 host.go:66] Checking if "test-preload-634039" exists ...
	I1217 01:21:43.521848   47519 addons.go:70] Setting default-storageclass=true in profile "test-preload-634039"
	I1217 01:21:43.521941   47519 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "test-preload-634039"
	I1217 01:21:43.523467   47519 out.go:179] * Verifying Kubernetes components...
	I1217 01:21:43.524295   47519 kapi.go:59] client config for test-preload-634039: &rest.Config{Host:"https://192.168.39.94:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22168-12839/.minikube/profiles/test-preload-634039/client.crt", KeyFile:"/home/jenkins/minikube-integration/22168-12839/.minikube/profiles/test-preload-634039/client.key", CAFile:"/home/jenkins/minikube-integration/22168-12839/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 01:21:43.524527   47519 addons.go:239] Setting addon default-storageclass=true in "test-preload-634039"
	W1217 01:21:43.524539   47519 addons.go:248] addon default-storageclass should already be in state true
	I1217 01:21:43.524554   47519 host.go:66] Checking if "test-preload-634039" exists ...
	I1217 01:21:43.524903   47519 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 01:21:43.524922   47519 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:21:43.526090   47519 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 01:21:43.526105   47519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 01:21:43.526117   47519 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 01:21:43.526126   47519 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 01:21:43.528855   47519 main.go:143] libmachine: domain test-preload-634039 has defined MAC address 52:54:00:bc:e5:0b in network mk-test-preload-634039
	I1217 01:21:43.529008   47519 main.go:143] libmachine: domain test-preload-634039 has defined MAC address 52:54:00:bc:e5:0b in network mk-test-preload-634039
	I1217 01:21:43.529404   47519 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bc:e5:0b", ip: ""} in network mk-test-preload-634039: {Iface:virbr1 ExpiryTime:2025-12-17 02:21:26 +0000 UTC Type:0 Mac:52:54:00:bc:e5:0b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:test-preload-634039 Clientid:01:52:54:00:bc:e5:0b}
	I1217 01:21:43.529443   47519 main.go:143] libmachine: domain test-preload-634039 has defined IP address 192.168.39.94 and MAC address 52:54:00:bc:e5:0b in network mk-test-preload-634039
	I1217 01:21:43.529477   47519 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bc:e5:0b", ip: ""} in network mk-test-preload-634039: {Iface:virbr1 ExpiryTime:2025-12-17 02:21:26 +0000 UTC Type:0 Mac:52:54:00:bc:e5:0b Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:test-preload-634039 Clientid:01:52:54:00:bc:e5:0b}
	I1217 01:21:43.529511   47519 main.go:143] libmachine: domain test-preload-634039 has defined IP address 192.168.39.94 and MAC address 52:54:00:bc:e5:0b in network mk-test-preload-634039
	I1217 01:21:43.529641   47519 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/test-preload-634039/id_rsa Username:docker}
	I1217 01:21:43.529812   47519 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/test-preload-634039/id_rsa Username:docker}
	I1217 01:21:43.785894   47519 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 01:21:43.829496   47519 node_ready.go:35] waiting up to 6m0s for node "test-preload-634039" to be "Ready" ...
	I1217 01:21:43.839255   47519 node_ready.go:49] node "test-preload-634039" is "Ready"
	I1217 01:21:43.839281   47519 node_ready.go:38] duration metric: took 9.746962ms for node "test-preload-634039" to be "Ready" ...
	I1217 01:21:43.839294   47519 api_server.go:52] waiting for apiserver process to appear ...
	I1217 01:21:43.839351   47519 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:21:43.887941   47519 api_server.go:72] duration metric: took 366.270084ms to wait for apiserver process to appear ...
	I1217 01:21:43.887973   47519 api_server.go:88] waiting for apiserver healthz status ...
	I1217 01:21:43.887996   47519 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8443/healthz ...
	I1217 01:21:43.895138   47519 api_server.go:279] https://192.168.39.94:8443/healthz returned 200:
	ok
	I1217 01:21:43.898342   47519 api_server.go:141] control plane version: v1.34.2
	I1217 01:21:43.898384   47519 api_server.go:131] duration metric: took 10.401761ms to wait for apiserver health ...
	I1217 01:21:43.898397   47519 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 01:21:43.900150   47519 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 01:21:43.908443   47519 system_pods.go:59] 7 kube-system pods found
	I1217 01:21:43.908482   47519 system_pods.go:61] "coredns-66bc5c9577-r6dkv" [42f1fbcd-08b1-4351-83bc-5c92e7666bad] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 01:21:43.908493   47519 system_pods.go:61] "etcd-test-preload-634039" [fd34f376-81c9-4f8b-8625-aea320116b91] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 01:21:43.908508   47519 system_pods.go:61] "kube-apiserver-test-preload-634039" [3eace9ae-b8bb-44fe-aa36-3f41eb24775a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 01:21:43.908517   47519 system_pods.go:61] "kube-controller-manager-test-preload-634039" [4cd1f0b7-eec0-4e53-81f5-5a1e94248d28] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 01:21:43.908523   47519 system_pods.go:61] "kube-proxy-txwjp" [7ff4545d-718f-4d12-a385-54f546fd7283] Running
	I1217 01:21:43.908533   47519 system_pods.go:61] "kube-scheduler-test-preload-634039" [fab60024-fc08-4fa9-86eb-ed9e64dcd78e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 01:21:43.908542   47519 system_pods.go:61] "storage-provisioner" [d275fc12-7534-4bdf-a56a-227b4d4c0eff] Running
	I1217 01:21:43.908550   47519 system_pods.go:74] duration metric: took 10.144813ms to wait for pod list to return data ...
	I1217 01:21:43.908563   47519 default_sa.go:34] waiting for default service account to be created ...
	I1217 01:21:43.918846   47519 default_sa.go:45] found service account: "default"
	I1217 01:21:43.918874   47519 default_sa.go:55] duration metric: took 10.30429ms for default service account to be created ...
	I1217 01:21:43.918886   47519 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 01:21:43.928773   47519 system_pods.go:86] 7 kube-system pods found
	I1217 01:21:43.928815   47519 system_pods.go:89] "coredns-66bc5c9577-r6dkv" [42f1fbcd-08b1-4351-83bc-5c92e7666bad] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 01:21:43.928830   47519 system_pods.go:89] "etcd-test-preload-634039" [fd34f376-81c9-4f8b-8625-aea320116b91] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 01:21:43.928845   47519 system_pods.go:89] "kube-apiserver-test-preload-634039" [3eace9ae-b8bb-44fe-aa36-3f41eb24775a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 01:21:43.928869   47519 system_pods.go:89] "kube-controller-manager-test-preload-634039" [4cd1f0b7-eec0-4e53-81f5-5a1e94248d28] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 01:21:43.928879   47519 system_pods.go:89] "kube-proxy-txwjp" [7ff4545d-718f-4d12-a385-54f546fd7283] Running
	I1217 01:21:43.928888   47519 system_pods.go:89] "kube-scheduler-test-preload-634039" [fab60024-fc08-4fa9-86eb-ed9e64dcd78e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 01:21:43.928897   47519 system_pods.go:89] "storage-provisioner" [d275fc12-7534-4bdf-a56a-227b4d4c0eff] Running
	I1217 01:21:43.928912   47519 system_pods.go:126] duration metric: took 10.018363ms to wait for k8s-apps to be running ...
	I1217 01:21:43.928927   47519 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 01:21:43.928991   47519 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:21:44.078795   47519 system_svc.go:56] duration metric: took 149.862454ms WaitForService to wait for kubelet
	I1217 01:21:44.078826   47519 kubeadm.go:587] duration metric: took 557.158356ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 01:21:44.078846   47519 node_conditions.go:102] verifying NodePressure condition ...
	I1217 01:21:44.082805   47519 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1217 01:21:44.082825   47519 node_conditions.go:123] node cpu capacity is 2
	I1217 01:21:44.082836   47519 node_conditions.go:105] duration metric: took 3.984969ms to run NodePressure ...
	I1217 01:21:44.082846   47519 start.go:242] waiting for startup goroutines ...
	I1217 01:21:44.108418   47519 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 01:21:44.725265   47519 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1217 01:21:44.726591   47519 addons.go:530] duration metric: took 1.204890178s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1217 01:21:44.726633   47519 start.go:247] waiting for cluster config update ...
	I1217 01:21:44.726648   47519 start.go:256] writing updated cluster config ...
	I1217 01:21:44.726943   47519 ssh_runner.go:195] Run: rm -f paused
	I1217 01:21:44.732810   47519 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 01:21:44.733276   47519 kapi.go:59] client config for test-preload-634039: &rest.Config{Host:"https://192.168.39.94:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22168-12839/.minikube/profiles/test-preload-634039/client.crt", KeyFile:"/home/jenkins/minikube-integration/22168-12839/.minikube/profiles/test-preload-634039/client.key", CAFile:"/home/jenkins/minikube-integration/22168-12839/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 01:21:44.736749   47519 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-r6dkv" in "kube-system" namespace to be "Ready" or be gone ...
	W1217 01:21:46.745454   47519 pod_ready.go:104] pod "coredns-66bc5c9577-r6dkv" is not "Ready", error: <nil>
	W1217 01:21:49.244057   47519 pod_ready.go:104] pod "coredns-66bc5c9577-r6dkv" is not "Ready", error: <nil>
	W1217 01:21:51.742529   47519 pod_ready.go:104] pod "coredns-66bc5c9577-r6dkv" is not "Ready", error: <nil>
	I1217 01:21:53.243473   47519 pod_ready.go:94] pod "coredns-66bc5c9577-r6dkv" is "Ready"
	I1217 01:21:53.243506   47519 pod_ready.go:86] duration metric: took 8.506739619s for pod "coredns-66bc5c9577-r6dkv" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:21:53.247799   47519 pod_ready.go:83] waiting for pod "etcd-test-preload-634039" in "kube-system" namespace to be "Ready" or be gone ...
	W1217 01:21:55.255133   47519 pod_ready.go:104] pod "etcd-test-preload-634039" is not "Ready", error: <nil>
	I1217 01:21:57.254179   47519 pod_ready.go:94] pod "etcd-test-preload-634039" is "Ready"
	I1217 01:21:57.254243   47519 pod_ready.go:86] duration metric: took 4.006406557s for pod "etcd-test-preload-634039" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:21:57.257161   47519 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-634039" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:21:57.261170   47519 pod_ready.go:94] pod "kube-apiserver-test-preload-634039" is "Ready"
	I1217 01:21:57.261196   47519 pod_ready.go:86] duration metric: took 4.013777ms for pod "kube-apiserver-test-preload-634039" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:21:57.263453   47519 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-634039" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:21:57.267909   47519 pod_ready.go:94] pod "kube-controller-manager-test-preload-634039" is "Ready"
	I1217 01:21:57.267926   47519 pod_ready.go:86] duration metric: took 4.455892ms for pod "kube-controller-manager-test-preload-634039" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:21:57.270312   47519 pod_ready.go:83] waiting for pod "kube-proxy-txwjp" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:21:57.452651   47519 pod_ready.go:94] pod "kube-proxy-txwjp" is "Ready"
	I1217 01:21:57.452695   47519 pod_ready.go:86] duration metric: took 182.352845ms for pod "kube-proxy-txwjp" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:21:57.652380   47519 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-634039" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:21:58.052301   47519 pod_ready.go:94] pod "kube-scheduler-test-preload-634039" is "Ready"
	I1217 01:21:58.052327   47519 pod_ready.go:86] duration metric: took 399.91905ms for pod "kube-scheduler-test-preload-634039" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:21:58.052342   47519 pod_ready.go:40] duration metric: took 13.319497198s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 01:21:58.095451   47519 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1217 01:21:58.097351   47519 out.go:179] * Done! kubectl is now configured to use "test-preload-634039" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 17 01:21:58 test-preload-634039 crio[836]: time="2025-12-17 01:21:58.879703917Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765934518879678993,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=26fc3edb-74b8-441a-be3a-d75b1adbf319 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 01:21:58 test-preload-634039 crio[836]: time="2025-12-17 01:21:58.881052219Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=08c5e83c-7c8a-4524-9dd3-f45579a7ddf3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 01:21:58 test-preload-634039 crio[836]: time="2025-12-17 01:21:58.881166147Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=08c5e83c-7c8a-4524-9dd3-f45579a7ddf3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 01:21:58 test-preload-634039 crio[836]: time="2025-12-17 01:21:58.881321919Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:26f8bf19eca16cae3c35897210c0158ec96aca5041ce7479a75b2e215815c388,PodSandboxId:1b766c9bacb8ffb4e4cd65218cfe226100cfa94add65334da37b3b40e80f1f4f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765934506638054199,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-r6dkv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42f1fbcd-08b1-4351-83bc-5c92e7666bad,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f17276c4c1cb9f66d5721df6e968cca1c4e26d4e6c2f7a8233d767096fba05a5,PodSandboxId:f915480ff1955f2cef0d5f710bed6862618ae2966b2a6f7da37e2410c76d2b12,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765934502989547021,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-txwjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ff4545d-718f-4d12-a385-54f546fd7283,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90b3db8438f75f0201c3aac223101357711515f65a3a7ff75d5067e042c9661b,PodSandboxId:e393e76c389ff6cc85353b7971ce5a260064adf602fe9d2c1268ba6ab8b7396d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765934502986977641,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d275fc12-7534-4bdf-a56a-227b4d4c0eff,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49af30f13a36c3f32b4c76e007d3e29d634e5f49be62ea4ca81ce9fcc984fa73,PodSandboxId:31f03cb5dca41d2303ba9f59ed52db22b313ebf2e3a8161700b77568d4d7c770,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765934498412732606,Labels:map[string]string{io.kubernetes.container.name: etcd,io.k
ubernetes.pod.name: etcd-test-preload-634039,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa2c580a36afebca52b018ced9d92d0,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a195686b8ce7d1e1e0a940015ff803180039869f3f8b4f65b3358c6cfc88e79,PodSandboxId:b4f1276eceed0e623a311282da63016ff2402836c704fa94672bcaa347a03d7a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUN
NING,CreatedAt:1765934498376542614,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-634039,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b227c5bf1a3484e8e22c782a8a6ef4be,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3403c8ae47034f6359cc0b6f42c522af04682a4c11e80e397e643564580f2bbf,PodSandboxId:93c96dbf91e1e97cd48e915deb7de415fb1de920c5ec1888101baa4df59d9a25,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},U
serSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765934498368096431,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-634039,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f773b62d9182f412470684d56d92a2aa,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4127562d90b8bb607d2eda659dda63c0c98dca7cede42f21743eb250eb8d89e7,PodSandboxId:261e58c6924a75b77abc8c3077e7664eb20bf78fa7b8762a2b50c5870e2f6016,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&Im
ageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765934498391240318,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-634039,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d6e7e99171745df1793cde7b64e3908,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=08c5e83c-7c8a-4524-9dd3-f45579a7ddf3 name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 17 01:21:58 test-preload-634039 crio[836]: time="2025-12-17 01:21:58.915048510Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a06cbb60-8e5d-4944-a6f0-af57e5e17d73 name=/runtime.v1.RuntimeService/Version
	Dec 17 01:21:58 test-preload-634039 crio[836]: time="2025-12-17 01:21:58.915174729Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a06cbb60-8e5d-4944-a6f0-af57e5e17d73 name=/runtime.v1.RuntimeService/Version
	Dec 17 01:21:58 test-preload-634039 crio[836]: time="2025-12-17 01:21:58.917106095Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=293a7d17-dfde-4160-acda-2f9870419507 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 01:21:58 test-preload-634039 crio[836]: time="2025-12-17 01:21:58.918216168Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765934518918188388,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=293a7d17-dfde-4160-acda-2f9870419507 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 01:21:58 test-preload-634039 crio[836]: time="2025-12-17 01:21:58.919260161Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=de19eef0-8614-4be5-b2f8-86143180e34f name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 01:21:58 test-preload-634039 crio[836]: time="2025-12-17 01:21:58.919316649Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=de19eef0-8614-4be5-b2f8-86143180e34f name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 01:21:58 test-preload-634039 crio[836]: time="2025-12-17 01:21:58.919484729Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:26f8bf19eca16cae3c35897210c0158ec96aca5041ce7479a75b2e215815c388,PodSandboxId:1b766c9bacb8ffb4e4cd65218cfe226100cfa94add65334da37b3b40e80f1f4f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765934506638054199,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-r6dkv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42f1fbcd-08b1-4351-83bc-5c92e7666bad,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f17276c4c1cb9f66d5721df6e968cca1c4e26d4e6c2f7a8233d767096fba05a5,PodSandboxId:f915480ff1955f2cef0d5f710bed6862618ae2966b2a6f7da37e2410c76d2b12,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765934502989547021,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-txwjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ff4545d-718f-4d12-a385-54f546fd7283,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90b3db8438f75f0201c3aac223101357711515f65a3a7ff75d5067e042c9661b,PodSandboxId:e393e76c389ff6cc85353b7971ce5a260064adf602fe9d2c1268ba6ab8b7396d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765934502986977641,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d275fc12-7534-4bdf-a56a-227b4d4c0eff,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49af30f13a36c3f32b4c76e007d3e29d634e5f49be62ea4ca81ce9fcc984fa73,PodSandboxId:31f03cb5dca41d2303ba9f59ed52db22b313ebf2e3a8161700b77568d4d7c770,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765934498412732606,Labels:map[string]string{io.kubernetes.container.name: etcd,io.k
ubernetes.pod.name: etcd-test-preload-634039,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa2c580a36afebca52b018ced9d92d0,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a195686b8ce7d1e1e0a940015ff803180039869f3f8b4f65b3358c6cfc88e79,PodSandboxId:b4f1276eceed0e623a311282da63016ff2402836c704fa94672bcaa347a03d7a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUN
NING,CreatedAt:1765934498376542614,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-634039,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b227c5bf1a3484e8e22c782a8a6ef4be,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3403c8ae47034f6359cc0b6f42c522af04682a4c11e80e397e643564580f2bbf,PodSandboxId:93c96dbf91e1e97cd48e915deb7de415fb1de920c5ec1888101baa4df59d9a25,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},U
serSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765934498368096431,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-634039,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f773b62d9182f412470684d56d92a2aa,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4127562d90b8bb607d2eda659dda63c0c98dca7cede42f21743eb250eb8d89e7,PodSandboxId:261e58c6924a75b77abc8c3077e7664eb20bf78fa7b8762a2b50c5870e2f6016,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&Im
ageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765934498391240318,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-634039,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d6e7e99171745df1793cde7b64e3908,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=de19eef0-8614-4be5-b2f8-86143180e34f name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 17 01:21:58 test-preload-634039 crio[836]: time="2025-12-17 01:21:58.956877002Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3babc23a-69c9-4826-b181-7a7462651a58 name=/runtime.v1.RuntimeService/Version
	Dec 17 01:21:58 test-preload-634039 crio[836]: time="2025-12-17 01:21:58.957093859Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3babc23a-69c9-4826-b181-7a7462651a58 name=/runtime.v1.RuntimeService/Version
	Dec 17 01:21:58 test-preload-634039 crio[836]: time="2025-12-17 01:21:58.958286855Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=018fec87-d7b1-4c2c-8bf1-a5271c7d67ee name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 01:21:58 test-preload-634039 crio[836]: time="2025-12-17 01:21:58.958765067Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765934518958717219,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=018fec87-d7b1-4c2c-8bf1-a5271c7d67ee name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 01:21:58 test-preload-634039 crio[836]: time="2025-12-17 01:21:58.959681841Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1ea46d42-336f-4d8f-9260-549990a83191 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 01:21:58 test-preload-634039 crio[836]: time="2025-12-17 01:21:58.959778441Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1ea46d42-336f-4d8f-9260-549990a83191 name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 01:21:58 test-preload-634039 crio[836]: time="2025-12-17 01:21:58.959936304Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:26f8bf19eca16cae3c35897210c0158ec96aca5041ce7479a75b2e215815c388,PodSandboxId:1b766c9bacb8ffb4e4cd65218cfe226100cfa94add65334da37b3b40e80f1f4f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765934506638054199,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-r6dkv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42f1fbcd-08b1-4351-83bc-5c92e7666bad,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f17276c4c1cb9f66d5721df6e968cca1c4e26d4e6c2f7a8233d767096fba05a5,PodSandboxId:f915480ff1955f2cef0d5f710bed6862618ae2966b2a6f7da37e2410c76d2b12,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765934502989547021,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-txwjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ff4545d-718f-4d12-a385-54f546fd7283,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90b3db8438f75f0201c3aac223101357711515f65a3a7ff75d5067e042c9661b,PodSandboxId:e393e76c389ff6cc85353b7971ce5a260064adf602fe9d2c1268ba6ab8b7396d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765934502986977641,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d275fc12-7534-4bdf-a56a-227b4d4c0eff,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49af30f13a36c3f32b4c76e007d3e29d634e5f49be62ea4ca81ce9fcc984fa73,PodSandboxId:31f03cb5dca41d2303ba9f59ed52db22b313ebf2e3a8161700b77568d4d7c770,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765934498412732606,Labels:map[string]string{io.kubernetes.container.name: etcd,io.k
ubernetes.pod.name: etcd-test-preload-634039,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa2c580a36afebca52b018ced9d92d0,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a195686b8ce7d1e1e0a940015ff803180039869f3f8b4f65b3358c6cfc88e79,PodSandboxId:b4f1276eceed0e623a311282da63016ff2402836c704fa94672bcaa347a03d7a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUN
NING,CreatedAt:1765934498376542614,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-634039,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b227c5bf1a3484e8e22c782a8a6ef4be,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3403c8ae47034f6359cc0b6f42c522af04682a4c11e80e397e643564580f2bbf,PodSandboxId:93c96dbf91e1e97cd48e915deb7de415fb1de920c5ec1888101baa4df59d9a25,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},U
serSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765934498368096431,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-634039,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f773b62d9182f412470684d56d92a2aa,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4127562d90b8bb607d2eda659dda63c0c98dca7cede42f21743eb250eb8d89e7,PodSandboxId:261e58c6924a75b77abc8c3077e7664eb20bf78fa7b8762a2b50c5870e2f6016,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&Im
ageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765934498391240318,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-634039,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d6e7e99171745df1793cde7b64e3908,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1ea46d42-336f-4d8f-9260-549990a83191 name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 17 01:21:58 test-preload-634039 crio[836]: time="2025-12-17 01:21:58.989281031Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6a16d364-4c3d-4279-b6d7-c17309c29b33 name=/runtime.v1.RuntimeService/Version
	Dec 17 01:21:58 test-preload-634039 crio[836]: time="2025-12-17 01:21:58.989371896Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6a16d364-4c3d-4279-b6d7-c17309c29b33 name=/runtime.v1.RuntimeService/Version
	Dec 17 01:21:58 test-preload-634039 crio[836]: time="2025-12-17 01:21:58.990846316Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=485845b5-e5dd-4f35-886a-8a046ec4b5bf name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 01:21:58 test-preload-634039 crio[836]: time="2025-12-17 01:21:58.991256287Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765934518991232849,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=485845b5-e5dd-4f35-886a-8a046ec4b5bf name=/runtime.v1.ImageService/ImageFsInfo
	Dec 17 01:21:58 test-preload-634039 crio[836]: time="2025-12-17 01:21:58.992405995Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0f328369-00b8-4fa5-be74-71ff54a7b83b name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 01:21:58 test-preload-634039 crio[836]: time="2025-12-17 01:21:58.992458216Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0f328369-00b8-4fa5-be74-71ff54a7b83b name=/runtime.v1.RuntimeService/ListContainers
	Dec 17 01:21:58 test-preload-634039 crio[836]: time="2025-12-17 01:21:58.992737642Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:26f8bf19eca16cae3c35897210c0158ec96aca5041ce7479a75b2e215815c388,PodSandboxId:1b766c9bacb8ffb4e4cd65218cfe226100cfa94add65334da37b3b40e80f1f4f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765934506638054199,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-r6dkv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42f1fbcd-08b1-4351-83bc-5c92e7666bad,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f17276c4c1cb9f66d5721df6e968cca1c4e26d4e6c2f7a8233d767096fba05a5,PodSandboxId:f915480ff1955f2cef0d5f710bed6862618ae2966b2a6f7da37e2410c76d2b12,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765934502989547021,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-txwjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ff4545d-718f-4d12-a385-54f546fd7283,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90b3db8438f75f0201c3aac223101357711515f65a3a7ff75d5067e042c9661b,PodSandboxId:e393e76c389ff6cc85353b7971ce5a260064adf602fe9d2c1268ba6ab8b7396d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765934502986977641,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d275fc12-7534-4bdf-a56a-227b4d4c0eff,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49af30f13a36c3f32b4c76e007d3e29d634e5f49be62ea4ca81ce9fcc984fa73,PodSandboxId:31f03cb5dca41d2303ba9f59ed52db22b313ebf2e3a8161700b77568d4d7c770,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765934498412732606,Labels:map[string]string{io.kubernetes.container.name: etcd,io.k
ubernetes.pod.name: etcd-test-preload-634039,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9aa2c580a36afebca52b018ced9d92d0,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a195686b8ce7d1e1e0a940015ff803180039869f3f8b4f65b3358c6cfc88e79,PodSandboxId:b4f1276eceed0e623a311282da63016ff2402836c704fa94672bcaa347a03d7a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUN
NING,CreatedAt:1765934498376542614,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-634039,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b227c5bf1a3484e8e22c782a8a6ef4be,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3403c8ae47034f6359cc0b6f42c522af04682a4c11e80e397e643564580f2bbf,PodSandboxId:93c96dbf91e1e97cd48e915deb7de415fb1de920c5ec1888101baa4df59d9a25,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},U
serSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765934498368096431,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-634039,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f773b62d9182f412470684d56d92a2aa,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4127562d90b8bb607d2eda659dda63c0c98dca7cede42f21743eb250eb8d89e7,PodSandboxId:261e58c6924a75b77abc8c3077e7664eb20bf78fa7b8762a2b50c5870e2f6016,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&Im
ageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765934498391240318,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-634039,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d6e7e99171745df1793cde7b64e3908,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0f328369-00b8-4fa5-be74-71ff54a7b83b name=/runtime.v1.RuntimeServic
e/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                           NAMESPACE
	26f8bf19eca16       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   12 seconds ago      Running             coredns                   1                   1b766c9bacb8f       coredns-66bc5c9577-r6dkv                      kube-system
	f17276c4c1cb9       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   16 seconds ago      Running             kube-proxy                1                   f915480ff1955       kube-proxy-txwjp                              kube-system
	90b3db8438f75       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 seconds ago      Running             storage-provisioner       1                   e393e76c389ff       storage-provisioner                           kube-system
	49af30f13a36c       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   20 seconds ago      Running             etcd                      1                   31f03cb5dca41       etcd-test-preload-634039                      kube-system
	4127562d90b8b       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   20 seconds ago      Running             kube-apiserver            1                   261e58c6924a7       kube-apiserver-test-preload-634039            kube-system
	9a195686b8ce7       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   20 seconds ago      Running             kube-controller-manager   1                   b4f1276eceed0       kube-controller-manager-test-preload-634039   kube-system
	3403c8ae47034       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   20 seconds ago      Running             kube-scheduler            1                   93c96dbf91e1e       kube-scheduler-test-preload-634039            kube-system
	
	
	==> coredns [26f8bf19eca16cae3c35897210c0158ec96aca5041ce7479a75b2e215815c388] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43033 - 23808 "HINFO IN 187592230366378253.9152040962219228733. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.039679261s
	
	
	==> describe nodes <==
	Name:               test-preload-634039
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-634039
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c7bb9b74fe8fa422b352c813eb039f077f405cb1
	                    minikube.k8s.io/name=test-preload-634039
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T01_20_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 01:20:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-634039
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 01:21:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 01:21:43 +0000   Wed, 17 Dec 2025 01:20:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 01:21:43 +0000   Wed, 17 Dec 2025 01:20:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 01:21:43 +0000   Wed, 17 Dec 2025 01:20:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 01:21:43 +0000   Wed, 17 Dec 2025 01:21:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.94
	  Hostname:    test-preload-634039
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 efb07da36b3c429f9a9a09b36c91d0ff
	  System UUID:                efb07da3-6b3c-429f-9a9a-09b36c91d0ff
	  Boot ID:                    7f15dd01-4688-4d07-b483-05bed0d84e76
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-r6dkv                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     89s
	  kube-system                 etcd-test-preload-634039                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         96s
	  kube-system                 kube-apiserver-test-preload-634039             250m (12%)    0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-controller-manager-test-preload-634039    200m (10%)    0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-proxy-txwjp                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-scheduler-test-preload-634039             100m (5%)     0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 87s                  kube-proxy       
	  Normal   Starting                 15s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  101s (x8 over 101s)  kubelet          Node test-preload-634039 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    101s (x8 over 101s)  kubelet          Node test-preload-634039 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     101s (x7 over 101s)  kubelet          Node test-preload-634039 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  101s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     94s                  kubelet          Node test-preload-634039 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  94s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  94s                  kubelet          Node test-preload-634039 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    94s                  kubelet          Node test-preload-634039 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 94s                  kubelet          Starting kubelet.
	  Normal   NodeReady                93s                  kubelet          Node test-preload-634039 status is now: NodeReady
	  Normal   RegisteredNode           90s                  node-controller  Node test-preload-634039 event: Registered Node test-preload-634039 in Controller
	  Normal   Starting                 23s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  23s (x8 over 23s)    kubelet          Node test-preload-634039 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    23s (x8 over 23s)    kubelet          Node test-preload-634039 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     23s (x7 over 23s)    kubelet          Node test-preload-634039 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  23s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 18s                  kubelet          Node test-preload-634039 has been rebooted, boot id: 7f15dd01-4688-4d07-b483-05bed0d84e76
	  Normal   RegisteredNode           14s                  node-controller  Node test-preload-634039 event: Registered Node test-preload-634039 in Controller
	
	
	==> dmesg <==
	[Dec17 01:21] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000059] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.001763] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.976118] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.105701] kauditd_printk_skb: 88 callbacks suppressed
	[  +6.602017] kauditd_printk_skb: 196 callbacks suppressed
	[  +6.758990] kauditd_printk_skb: 197 callbacks suppressed
	
	
	==> etcd [49af30f13a36c3f32b4c76e007d3e29d634e5f49be62ea4ca81ce9fcc984fa73] <==
	{"level":"warn","ts":"2025-12-17T01:21:40.221258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T01:21:40.243399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T01:21:40.259848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T01:21:40.292522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T01:21:40.294557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T01:21:40.307322Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T01:21:40.321073Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T01:21:40.331242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T01:21:40.343660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T01:21:40.357269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T01:21:40.369300Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T01:21:40.391939Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T01:21:40.412160Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T01:21:40.424012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T01:21:40.439739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T01:21:40.453747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T01:21:40.476576Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T01:21:40.487482Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T01:21:40.516899Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T01:21:40.535339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T01:21:40.560526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T01:21:40.598641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T01:21:40.622827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T01:21:40.637276Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T01:21:40.694679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59404","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 01:21:59 up 0 min,  0 users,  load average: 1.40, 0.37, 0.13
	Linux test-preload-634039 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Dec 16 03:41:16 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [4127562d90b8bb607d2eda659dda63c0c98dca7cede42f21743eb250eb8d89e7] <==
	I1217 01:21:41.632203       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 01:21:41.632335       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1217 01:21:41.632344       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1217 01:21:41.633139       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 01:21:41.633199       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1217 01:21:41.633288       1 aggregator.go:171] initial CRD sync complete...
	I1217 01:21:41.633312       1 autoregister_controller.go:144] Starting autoregister controller
	I1217 01:21:41.633317       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1217 01:21:41.633321       1 cache.go:39] Caches are synced for autoregister controller
	I1217 01:21:41.635528       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1217 01:21:41.637494       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1217 01:21:41.647457       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1217 01:21:41.647565       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1217 01:21:41.647682       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1217 01:21:41.648776       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	E1217 01:21:41.651295       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1217 01:21:42.441031       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1217 01:21:42.617547       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 01:21:43.335441       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 01:21:43.387726       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1217 01:21:43.425226       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 01:21:43.433227       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 01:21:45.027684       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 01:21:45.328399       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 01:21:45.378342       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [9a195686b8ce7d1e1e0a940015ff803180039869f3f8b4f65b3358c6cfc88e79] <==
	I1217 01:21:45.023948       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1217 01:21:45.024938       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1217 01:21:45.024994       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1217 01:21:45.025006       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1217 01:21:45.026064       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1217 01:21:45.027142       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1217 01:21:45.030529       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1217 01:21:45.032872       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1217 01:21:45.032856       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 01:21:45.037144       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1217 01:21:45.042569       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1217 01:21:45.042580       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1217 01:21:45.042632       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1217 01:21:45.043779       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1217 01:21:45.046057       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1217 01:21:45.046123       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1217 01:21:45.051470       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1217 01:21:45.057868       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 01:21:45.062275       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 01:21:45.062386       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1217 01:21:45.074080       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1217 01:21:45.074094       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1217 01:21:45.074286       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1217 01:21:45.077657       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1217 01:21:45.089961       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	
	
	==> kube-proxy [f17276c4c1cb9f66d5721df6e968cca1c4e26d4e6c2f7a8233d767096fba05a5] <==
	I1217 01:21:43.255701       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 01:21:43.356094       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 01:21:43.356145       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.94"]
	E1217 01:21:43.356232       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 01:21:43.430193       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1217 01:21:43.430295       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1217 01:21:43.430326       1 server_linux.go:132] "Using iptables Proxier"
	I1217 01:21:43.444915       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 01:21:43.445272       1 server.go:527] "Version info" version="v1.34.2"
	I1217 01:21:43.445306       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 01:21:43.448062       1 config.go:200] "Starting service config controller"
	I1217 01:21:43.448147       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 01:21:43.448165       1 config.go:106] "Starting endpoint slice config controller"
	I1217 01:21:43.448168       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 01:21:43.448447       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 01:21:43.448454       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 01:21:43.452143       1 config.go:309] "Starting node config controller"
	I1217 01:21:43.452222       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 01:21:43.452230       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 01:21:43.548570       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 01:21:43.548644       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 01:21:43.548677       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [3403c8ae47034f6359cc0b6f42c522af04682a4c11e80e397e643564580f2bbf] <==
	I1217 01:21:39.599271       1 serving.go:386] Generated self-signed cert in-memory
	I1217 01:21:41.639084       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1217 01:21:41.639128       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 01:21:41.651902       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1217 01:21:41.651944       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1217 01:21:41.652064       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 01:21:41.652097       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 01:21:41.652113       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1217 01:21:41.652119       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1217 01:21:41.654077       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 01:21:41.654201       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1217 01:21:41.752871       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1217 01:21:41.752944       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1217 01:21:41.753239       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 17 01:21:41 test-preload-634039 kubelet[1190]: I1217 01:21:41.740659    1190 kubelet_node_status.go:124] "Node was previously registered" node="test-preload-634039"
	Dec 17 01:21:41 test-preload-634039 kubelet[1190]: I1217 01:21:41.740767    1190 kubelet_node_status.go:78] "Successfully registered node" node="test-preload-634039"
	Dec 17 01:21:41 test-preload-634039 kubelet[1190]: I1217 01:21:41.740794    1190 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 17 01:21:41 test-preload-634039 kubelet[1190]: I1217 01:21:41.742328    1190 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 17 01:21:41 test-preload-634039 kubelet[1190]: I1217 01:21:41.743742    1190 setters.go:543] "Node became not ready" node="test-preload-634039" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-17T01:21:41Z","lastTransitionTime":"2025-12-17T01:21:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"}
	Dec 17 01:21:42 test-preload-634039 kubelet[1190]: I1217 01:21:42.405506    1190 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-test-preload-634039"
	Dec 17 01:21:42 test-preload-634039 kubelet[1190]: E1217 01:21:42.415055    1190 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-test-preload-634039\" already exists" pod="kube-system/kube-controller-manager-test-preload-634039"
	Dec 17 01:21:42 test-preload-634039 kubelet[1190]: I1217 01:21:42.495699    1190 apiserver.go:52] "Watching apiserver"
	Dec 17 01:21:42 test-preload-634039 kubelet[1190]: E1217 01:21:42.501542    1190 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-66bc5c9577-r6dkv" podUID="42f1fbcd-08b1-4351-83bc-5c92e7666bad"
	Dec 17 01:21:42 test-preload-634039 kubelet[1190]: I1217 01:21:42.521535    1190 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 17 01:21:42 test-preload-634039 kubelet[1190]: I1217 01:21:42.605314    1190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ff4545d-718f-4d12-a385-54f546fd7283-lib-modules\") pod \"kube-proxy-txwjp\" (UID: \"7ff4545d-718f-4d12-a385-54f546fd7283\") " pod="kube-system/kube-proxy-txwjp"
	Dec 17 01:21:42 test-preload-634039 kubelet[1190]: I1217 01:21:42.605357    1190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7ff4545d-718f-4d12-a385-54f546fd7283-xtables-lock\") pod \"kube-proxy-txwjp\" (UID: \"7ff4545d-718f-4d12-a385-54f546fd7283\") " pod="kube-system/kube-proxy-txwjp"
	Dec 17 01:21:42 test-preload-634039 kubelet[1190]: I1217 01:21:42.605388    1190 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d275fc12-7534-4bdf-a56a-227b4d4c0eff-tmp\") pod \"storage-provisioner\" (UID: \"d275fc12-7534-4bdf-a56a-227b4d4c0eff\") " pod="kube-system/storage-provisioner"
	Dec 17 01:21:42 test-preload-634039 kubelet[1190]: E1217 01:21:42.606810    1190 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 17 01:21:42 test-preload-634039 kubelet[1190]: E1217 01:21:42.608666    1190 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/42f1fbcd-08b1-4351-83bc-5c92e7666bad-config-volume podName:42f1fbcd-08b1-4351-83bc-5c92e7666bad nodeName:}" failed. No retries permitted until 2025-12-17 01:21:43.108381857 +0000 UTC m=+6.700658783 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/42f1fbcd-08b1-4351-83bc-5c92e7666bad-config-volume") pod "coredns-66bc5c9577-r6dkv" (UID: "42f1fbcd-08b1-4351-83bc-5c92e7666bad") : object "kube-system"/"coredns" not registered
	Dec 17 01:21:43 test-preload-634039 kubelet[1190]: E1217 01:21:43.108455    1190 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 17 01:21:43 test-preload-634039 kubelet[1190]: E1217 01:21:43.108546    1190 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/42f1fbcd-08b1-4351-83bc-5c92e7666bad-config-volume podName:42f1fbcd-08b1-4351-83bc-5c92e7666bad nodeName:}" failed. No retries permitted until 2025-12-17 01:21:44.108527443 +0000 UTC m=+7.700804375 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/42f1fbcd-08b1-4351-83bc-5c92e7666bad-config-volume") pod "coredns-66bc5c9577-r6dkv" (UID: "42f1fbcd-08b1-4351-83bc-5c92e7666bad") : object "kube-system"/"coredns" not registered
	Dec 17 01:21:43 test-preload-634039 kubelet[1190]: I1217 01:21:43.646922    1190 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 17 01:21:44 test-preload-634039 kubelet[1190]: E1217 01:21:44.117089    1190 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 17 01:21:44 test-preload-634039 kubelet[1190]: E1217 01:21:44.117530    1190 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/42f1fbcd-08b1-4351-83bc-5c92e7666bad-config-volume podName:42f1fbcd-08b1-4351-83bc-5c92e7666bad nodeName:}" failed. No retries permitted until 2025-12-17 01:21:46.1175072 +0000 UTC m=+9.709784145 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/42f1fbcd-08b1-4351-83bc-5c92e7666bad-config-volume") pod "coredns-66bc5c9577-r6dkv" (UID: "42f1fbcd-08b1-4351-83bc-5c92e7666bad") : object "kube-system"/"coredns" not registered
	Dec 17 01:21:46 test-preload-634039 kubelet[1190]: E1217 01:21:46.583744    1190 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765934506583173264 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	Dec 17 01:21:46 test-preload-634039 kubelet[1190]: E1217 01:21:46.584050    1190 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765934506583173264 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	Dec 17 01:21:53 test-preload-634039 kubelet[1190]: I1217 01:21:53.181066    1190 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 17 01:21:56 test-preload-634039 kubelet[1190]: E1217 01:21:56.586008    1190 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765934516585162003 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	Dec 17 01:21:56 test-preload-634039 kubelet[1190]: E1217 01:21:56.586829    1190 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765934516585162003 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	
	
	==> storage-provisioner [90b3db8438f75f0201c3aac223101357711515f65a3a7ff75d5067e042c9661b] <==
	I1217 01:21:43.126826       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-634039 -n test-preload-634039
helpers_test.go:270: (dbg) Run:  kubectl --context test-preload-634039 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:176: Cleaning up "test-preload-634039" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-634039
--- FAIL: TestPreload (141.74s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (44.34s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-716229 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-716229 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (40.383059425s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-716229] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22168-12839/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12839/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-716229" primary control-plane node in "pause-716229" cluster
	* Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-716229" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 01:30:19.435564   55831 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:30:19.435724   55831 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:30:19.435736   55831 out.go:374] Setting ErrFile to fd 2...
	I1217 01:30:19.435742   55831 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:30:19.436062   55831 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12839/.minikube/bin
	I1217 01:30:19.436673   55831 out.go:368] Setting JSON to false
	I1217 01:30:19.437941   55831 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7965,"bootTime":1765927054,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 01:30:19.437992   55831 start.go:143] virtualization: kvm guest
	I1217 01:30:19.440813   55831 out.go:179] * [pause-716229] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 01:30:19.444039   55831 notify.go:221] Checking for updates...
	I1217 01:30:19.444067   55831 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 01:30:19.445577   55831 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 01:30:19.447011   55831 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-12839/kubeconfig
	I1217 01:30:19.448616   55831 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12839/.minikube
	I1217 01:30:19.449986   55831 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 01:30:19.451166   55831 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 01:30:19.453108   55831 config.go:182] Loaded profile config "pause-716229": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:30:19.453690   55831 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 01:30:19.494887   55831 out.go:179] * Using the kvm2 driver based on existing profile
	I1217 01:30:19.495979   55831 start.go:309] selected driver: kvm2
	I1217 01:30:19.495995   55831 start.go:927] validating driver "kvm2" against &{Name:pause-716229 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.34.2 ClusterName:pause-716229 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.9 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-install
er:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:30:19.496174   55831 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 01:30:19.497122   55831 cni.go:84] Creating CNI manager for ""
	I1217 01:30:19.497194   55831 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 01:30:19.497254   55831 start.go:353] cluster config:
	{Name:pause-716229 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-716229 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.9 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false po
rtainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:30:19.497389   55831 iso.go:125] acquiring lock: {Name:mk94a221d1243bc618ab687e91468d7a3f9fe960 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:30:19.499240   55831 out.go:179] * Starting "pause-716229" primary control-plane node in "pause-716229" cluster
	I1217 01:30:19.500444   55831 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 01:30:19.500486   55831 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-12839/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1217 01:30:19.500499   55831 cache.go:65] Caching tarball of preloaded images
	I1217 01:30:19.500597   55831 preload.go:238] Found /home/jenkins/minikube-integration/22168-12839/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 01:30:19.500610   55831 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1217 01:30:19.500750   55831 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/pause-716229/config.json ...
	I1217 01:30:19.500981   55831 start.go:360] acquireMachinesLock for pause-716229: {Name:mke100036b6b648b2e8844ce094d9d979b4c8eb4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1217 01:30:19.501078   55831 start.go:364] duration metric: took 78.023µs to acquireMachinesLock for "pause-716229"
	I1217 01:30:19.501097   55831 start.go:96] Skipping create...Using existing machine configuration
	I1217 01:30:19.501104   55831 fix.go:54] fixHost starting: 
	I1217 01:30:19.503236   55831 fix.go:112] recreateIfNeeded on pause-716229: state=Running err=<nil>
	W1217 01:30:19.503273   55831 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 01:30:19.504720   55831 out.go:252] * Updating the running kvm2 "pause-716229" VM ...
	I1217 01:30:19.504749   55831 machine.go:94] provisionDockerMachine start ...
	I1217 01:30:19.507267   55831 main.go:143] libmachine: domain pause-716229 has defined MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:19.507772   55831 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:cd:79", ip: ""} in network mk-pause-716229: {Iface:virbr3 ExpiryTime:2025-12-17 02:29:14 +0000 UTC Type:0 Mac:52:54:00:41:cd:79 Iaid: IPaddr:192.168.61.9 Prefix:24 Hostname:pause-716229 Clientid:01:52:54:00:41:cd:79}
	I1217 01:30:19.507796   55831 main.go:143] libmachine: domain pause-716229 has defined IP address 192.168.61.9 and MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:19.507953   55831 main.go:143] libmachine: Using SSH client type: native
	I1217 01:30:19.508187   55831 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.61.9 22 <nil> <nil>}
	I1217 01:30:19.508203   55831 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 01:30:19.638055   55831 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-716229
	
	I1217 01:30:19.638093   55831 buildroot.go:166] provisioning hostname "pause-716229"
	I1217 01:30:19.643779   55831 main.go:143] libmachine: domain pause-716229 has defined MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:19.644347   55831 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:cd:79", ip: ""} in network mk-pause-716229: {Iface:virbr3 ExpiryTime:2025-12-17 02:29:14 +0000 UTC Type:0 Mac:52:54:00:41:cd:79 Iaid: IPaddr:192.168.61.9 Prefix:24 Hostname:pause-716229 Clientid:01:52:54:00:41:cd:79}
	I1217 01:30:19.644374   55831 main.go:143] libmachine: domain pause-716229 has defined IP address 192.168.61.9 and MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:19.644571   55831 main.go:143] libmachine: Using SSH client type: native
	I1217 01:30:19.644819   55831 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.61.9 22 <nil> <nil>}
	I1217 01:30:19.644833   55831 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-716229 && echo "pause-716229" | sudo tee /etc/hostname
	I1217 01:30:19.787607   55831 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-716229
	
	I1217 01:30:19.791566   55831 main.go:143] libmachine: domain pause-716229 has defined MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:19.792103   55831 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:cd:79", ip: ""} in network mk-pause-716229: {Iface:virbr3 ExpiryTime:2025-12-17 02:29:14 +0000 UTC Type:0 Mac:52:54:00:41:cd:79 Iaid: IPaddr:192.168.61.9 Prefix:24 Hostname:pause-716229 Clientid:01:52:54:00:41:cd:79}
	I1217 01:30:19.792138   55831 main.go:143] libmachine: domain pause-716229 has defined IP address 192.168.61.9 and MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:19.792331   55831 main.go:143] libmachine: Using SSH client type: native
	I1217 01:30:19.792620   55831 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.61.9 22 <nil> <nil>}
	I1217 01:30:19.792645   55831 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-716229' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-716229/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-716229' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 01:30:19.917161   55831 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 01:30:19.917193   55831 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22168-12839/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-12839/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-12839/.minikube}
	I1217 01:30:19.917223   55831 buildroot.go:174] setting up certificates
	I1217 01:30:19.917236   55831 provision.go:84] configureAuth start
	I1217 01:30:19.920490   55831 main.go:143] libmachine: domain pause-716229 has defined MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:19.921061   55831 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:cd:79", ip: ""} in network mk-pause-716229: {Iface:virbr3 ExpiryTime:2025-12-17 02:29:14 +0000 UTC Type:0 Mac:52:54:00:41:cd:79 Iaid: IPaddr:192.168.61.9 Prefix:24 Hostname:pause-716229 Clientid:01:52:54:00:41:cd:79}
	I1217 01:30:19.921099   55831 main.go:143] libmachine: domain pause-716229 has defined IP address 192.168.61.9 and MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:19.924101   55831 main.go:143] libmachine: domain pause-716229 has defined MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:19.924504   55831 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:cd:79", ip: ""} in network mk-pause-716229: {Iface:virbr3 ExpiryTime:2025-12-17 02:29:14 +0000 UTC Type:0 Mac:52:54:00:41:cd:79 Iaid: IPaddr:192.168.61.9 Prefix:24 Hostname:pause-716229 Clientid:01:52:54:00:41:cd:79}
	I1217 01:30:19.924527   55831 main.go:143] libmachine: domain pause-716229 has defined IP address 192.168.61.9 and MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:19.924692   55831 provision.go:143] copyHostCerts
	I1217 01:30:19.924752   55831 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12839/.minikube/ca.pem, removing ...
	I1217 01:30:19.924772   55831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12839/.minikube/ca.pem
	I1217 01:30:19.924841   55831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-12839/.minikube/ca.pem (1078 bytes)
	I1217 01:30:19.924988   55831 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12839/.minikube/cert.pem, removing ...
	I1217 01:30:19.925002   55831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12839/.minikube/cert.pem
	I1217 01:30:19.925047   55831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-12839/.minikube/cert.pem (1123 bytes)
	I1217 01:30:19.925151   55831 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12839/.minikube/key.pem, removing ...
	I1217 01:30:19.925175   55831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12839/.minikube/key.pem
	I1217 01:30:19.925208   55831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-12839/.minikube/key.pem (1679 bytes)
	I1217 01:30:19.925360   55831 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-12839/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca-key.pem org=jenkins.pause-716229 san=[127.0.0.1 192.168.61.9 localhost minikube pause-716229]
	I1217 01:30:19.984817   55831 provision.go:177] copyRemoteCerts
	I1217 01:30:19.984903   55831 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 01:30:19.987915   55831 main.go:143] libmachine: domain pause-716229 has defined MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:19.988514   55831 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:cd:79", ip: ""} in network mk-pause-716229: {Iface:virbr3 ExpiryTime:2025-12-17 02:29:14 +0000 UTC Type:0 Mac:52:54:00:41:cd:79 Iaid: IPaddr:192.168.61.9 Prefix:24 Hostname:pause-716229 Clientid:01:52:54:00:41:cd:79}
	I1217 01:30:19.988557   55831 main.go:143] libmachine: domain pause-716229 has defined IP address 192.168.61.9 and MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:19.988798   55831 sshutil.go:53] new ssh client: &{IP:192.168.61.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/pause-716229/id_rsa Username:docker}
	I1217 01:30:20.085814   55831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 01:30:20.125959   55831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1217 01:30:20.166703   55831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 01:30:20.208381   55831 provision.go:87] duration metric: took 291.127344ms to configureAuth
	I1217 01:30:20.208410   55831 buildroot.go:189] setting minikube options for container-runtime
	I1217 01:30:20.208679   55831 config.go:182] Loaded profile config "pause-716229": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:30:20.212425   55831 main.go:143] libmachine: domain pause-716229 has defined MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:20.212953   55831 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:cd:79", ip: ""} in network mk-pause-716229: {Iface:virbr3 ExpiryTime:2025-12-17 02:29:14 +0000 UTC Type:0 Mac:52:54:00:41:cd:79 Iaid: IPaddr:192.168.61.9 Prefix:24 Hostname:pause-716229 Clientid:01:52:54:00:41:cd:79}
	I1217 01:30:20.212990   55831 main.go:143] libmachine: domain pause-716229 has defined IP address 192.168.61.9 and MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:20.213266   55831 main.go:143] libmachine: Using SSH client type: native
	I1217 01:30:20.213561   55831 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.61.9 22 <nil> <nil>}
	I1217 01:30:20.213591   55831 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 01:30:25.882124   55831 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 01:30:25.882158   55831 machine.go:97] duration metric: took 6.37739993s to provisionDockerMachine
	I1217 01:30:25.882173   55831 start.go:293] postStartSetup for "pause-716229" (driver="kvm2")
	I1217 01:30:25.882210   55831 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 01:30:25.882298   55831 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 01:30:25.886127   55831 main.go:143] libmachine: domain pause-716229 has defined MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:25.886654   55831 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:cd:79", ip: ""} in network mk-pause-716229: {Iface:virbr3 ExpiryTime:2025-12-17 02:29:14 +0000 UTC Type:0 Mac:52:54:00:41:cd:79 Iaid: IPaddr:192.168.61.9 Prefix:24 Hostname:pause-716229 Clientid:01:52:54:00:41:cd:79}
	I1217 01:30:25.886683   55831 main.go:143] libmachine: domain pause-716229 has defined IP address 192.168.61.9 and MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:25.886836   55831 sshutil.go:53] new ssh client: &{IP:192.168.61.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/pause-716229/id_rsa Username:docker}
	I1217 01:30:25.981319   55831 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 01:30:25.986394   55831 info.go:137] Remote host: Buildroot 2025.02
	I1217 01:30:25.986418   55831 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12839/.minikube/addons for local assets ...
	I1217 01:30:25.986487   55831 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12839/.minikube/files for local assets ...
	I1217 01:30:25.986592   55831 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-12839/.minikube/files/etc/ssl/certs/170742.pem -> 170742.pem in /etc/ssl/certs
	I1217 01:30:25.986710   55831 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 01:30:25.999107   55831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/files/etc/ssl/certs/170742.pem --> /etc/ssl/certs/170742.pem (1708 bytes)
	I1217 01:30:26.032262   55831 start.go:296] duration metric: took 150.072721ms for postStartSetup
	I1217 01:30:26.032309   55831 fix.go:56] duration metric: took 6.531204073s for fixHost
	I1217 01:30:26.035448   55831 main.go:143] libmachine: domain pause-716229 has defined MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:26.035824   55831 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:cd:79", ip: ""} in network mk-pause-716229: {Iface:virbr3 ExpiryTime:2025-12-17 02:29:14 +0000 UTC Type:0 Mac:52:54:00:41:cd:79 Iaid: IPaddr:192.168.61.9 Prefix:24 Hostname:pause-716229 Clientid:01:52:54:00:41:cd:79}
	I1217 01:30:26.035847   55831 main.go:143] libmachine: domain pause-716229 has defined IP address 192.168.61.9 and MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:26.036044   55831 main.go:143] libmachine: Using SSH client type: native
	I1217 01:30:26.036305   55831 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.61.9 22 <nil> <nil>}
	I1217 01:30:26.036319   55831 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1217 01:30:26.159443   55831 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765935026.155536878
	
	I1217 01:30:26.159474   55831 fix.go:216] guest clock: 1765935026.155536878
	I1217 01:30:26.159482   55831 fix.go:229] Guest: 2025-12-17 01:30:26.155536878 +0000 UTC Remote: 2025-12-17 01:30:26.032314252 +0000 UTC m=+6.652094255 (delta=123.222626ms)
	I1217 01:30:26.159501   55831 fix.go:200] guest clock delta is within tolerance: 123.222626ms
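	The delta above is simply the guest clock minus the host-side reference captured at fix time: 1765935026.155536878 - 1765935026.032314252 = 0.123222626 s, i.e. the 123.222626ms reported, comfortably inside the tolerance, so no clock resync is needed.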
	I1217 01:30:26.159507   55831 start.go:83] releasing machines lock for "pause-716229", held for 6.658418729s
	I1217 01:30:26.163103   55831 main.go:143] libmachine: domain pause-716229 has defined MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:26.163720   55831 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:cd:79", ip: ""} in network mk-pause-716229: {Iface:virbr3 ExpiryTime:2025-12-17 02:29:14 +0000 UTC Type:0 Mac:52:54:00:41:cd:79 Iaid: IPaddr:192.168.61.9 Prefix:24 Hostname:pause-716229 Clientid:01:52:54:00:41:cd:79}
	I1217 01:30:26.163776   55831 main.go:143] libmachine: domain pause-716229 has defined IP address 192.168.61.9 and MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:26.164643   55831 ssh_runner.go:195] Run: cat /version.json
	I1217 01:30:26.164777   55831 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 01:30:26.168104   55831 main.go:143] libmachine: domain pause-716229 has defined MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:26.168166   55831 main.go:143] libmachine: domain pause-716229 has defined MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:26.168563   55831 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:cd:79", ip: ""} in network mk-pause-716229: {Iface:virbr3 ExpiryTime:2025-12-17 02:29:14 +0000 UTC Type:0 Mac:52:54:00:41:cd:79 Iaid: IPaddr:192.168.61.9 Prefix:24 Hostname:pause-716229 Clientid:01:52:54:00:41:cd:79}
	I1217 01:30:26.168607   55831 main.go:143] libmachine: domain pause-716229 has defined IP address 192.168.61.9 and MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:26.168657   55831 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:cd:79", ip: ""} in network mk-pause-716229: {Iface:virbr3 ExpiryTime:2025-12-17 02:29:14 +0000 UTC Type:0 Mac:52:54:00:41:cd:79 Iaid: IPaddr:192.168.61.9 Prefix:24 Hostname:pause-716229 Clientid:01:52:54:00:41:cd:79}
	I1217 01:30:26.168705   55831 main.go:143] libmachine: domain pause-716229 has defined IP address 192.168.61.9 and MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:26.168817   55831 sshutil.go:53] new ssh client: &{IP:192.168.61.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/pause-716229/id_rsa Username:docker}
	I1217 01:30:26.169069   55831 sshutil.go:53] new ssh client: &{IP:192.168.61.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/pause-716229/id_rsa Username:docker}
	I1217 01:30:26.315315   55831 ssh_runner.go:195] Run: systemctl --version
	I1217 01:30:26.328723   55831 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 01:30:26.561010   55831 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 01:30:26.585959   55831 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 01:30:26.586076   55831 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 01:30:26.641658   55831 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
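	The find invocation above moves any bridge/podman CNI configs out of the runtime's way by appending .mk_disabled. A more readable shell equivalent (a sketch with the same intent, not the exact command minikube runs):
	
	  for f in /etc/cni/net.d/*bridge* /etc/cni/net.d/*podman*; do
	    [ -e "$f" ] || continue                  # glob matched nothing
	    case "$f" in *.mk_disabled) continue ;; esac
	    sudo mv "$f" "${f}.mk_disabled"
	  done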
	I1217 01:30:26.641684   55831 start.go:496] detecting cgroup driver to use...
	I1217 01:30:26.641768   55831 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 01:30:26.693772   55831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 01:30:26.719728   55831 docker.go:218] disabling cri-docker service (if available) ...
	I1217 01:30:26.719802   55831 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 01:30:26.765707   55831 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 01:30:26.805381   55831 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 01:30:27.223191   55831 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 01:30:27.618734   55831 docker.go:234] disabling docker service ...
	I1217 01:30:27.618813   55831 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 01:30:27.696855   55831 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 01:30:27.740817   55831 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 01:30:28.152831   55831 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 01:30:28.522103   55831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 01:30:28.547218   55831 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 01:30:28.592003   55831 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 01:30:28.592089   55831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:30:28.637881   55831 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 01:30:28.637959   55831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:30:28.676778   55831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:30:28.703880   55831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:30:28.732130   55831 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 01:30:28.761140   55831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:30:28.777472   55831 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:30:28.798808   55831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
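	Taken together, the sed edits above converge on a CRI-O drop-in along these lines (a sketch assuming the stock section layout of 02-crio.conf; the real file may carry additional keys):
	
	  [crio.image]
	  pause_image = "registry.k8s.io/pause:3.10.1"
	
	  [crio.runtime]
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]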
	I1217 01:30:28.822861   55831 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 01:30:28.842091   55831 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 01:30:28.864630   55831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:30:29.210820   55831 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 01:30:39.511179   55831 ssh_runner.go:235] Completed: sudo systemctl restart crio: (10.300301609s)
	I1217 01:30:39.511222   55831 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 01:30:39.511275   55831 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 01:30:39.517085   55831 start.go:564] Will wait 60s for crictl version
	I1217 01:30:39.517172   55831 ssh_runner.go:195] Run: which crictl
	I1217 01:30:39.521729   55831 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1217 01:30:39.615915   55831 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1217 01:30:39.616011   55831 ssh_runner.go:195] Run: crio --version
	I1217 01:30:39.673265   55831 ssh_runner.go:195] Run: crio --version
	I1217 01:30:39.728526   55831 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1217 01:30:39.733629   55831 main.go:143] libmachine: domain pause-716229 has defined MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:39.734200   55831 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:cd:79", ip: ""} in network mk-pause-716229: {Iface:virbr3 ExpiryTime:2025-12-17 02:29:14 +0000 UTC Type:0 Mac:52:54:00:41:cd:79 Iaid: IPaddr:192.168.61.9 Prefix:24 Hostname:pause-716229 Clientid:01:52:54:00:41:cd:79}
	I1217 01:30:39.734246   55831 main.go:143] libmachine: domain pause-716229 has defined IP address 192.168.61.9 and MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:39.734572   55831 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1217 01:30:39.743102   55831 kubeadm.go:884] updating cluster {Name:pause-716229 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-716229 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.9 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 01:30:39.743327   55831 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 01:30:39.743396   55831 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 01:30:39.901711   55831 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 01:30:39.901745   55831 crio.go:433] Images already preloaded, skipping extraction
	I1217 01:30:39.901815   55831 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 01:30:39.983579   55831 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 01:30:39.983613   55831 cache_images.go:86] Images are preloaded, skipping loading
	I1217 01:30:39.983624   55831 kubeadm.go:935] updating node { 192.168.61.9 8443 v1.34.2 crio true true} ...
	I1217 01:30:39.983759   55831 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-716229 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.9
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-716229 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
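	The empty ExecStart= line in the unit above is the standard systemd drop-in idiom: it clears the ExecStart inherited from the base kubelet.service before setting the new command. Once the drop-in is installed (it is scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf below), the merged unit can be inspected with:
	
	  sudo systemctl cat kubelet --no-pager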
	I1217 01:30:39.983876   55831 ssh_runner.go:195] Run: crio config
	I1217 01:30:40.089652   55831 cni.go:84] Creating CNI manager for ""
	I1217 01:30:40.089693   55831 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 01:30:40.089712   55831 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 01:30:40.089741   55831 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.9 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-716229 NodeName:pause-716229 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.9"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.9 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 01:30:40.089943   55831 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.9
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-716229"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.9"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.9"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 01:30:40.090113   55831 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1217 01:30:40.118423   55831 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 01:30:40.118516   55831 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 01:30:40.140559   55831 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I1217 01:30:40.182106   55831 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 01:30:40.207405   55831 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
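	The rendered config lands at /var/tmp/minikube/kubeadm.yaml.new. To lint it by hand, kubeadm v1.26+ ships a validator (a hypothetical check, using the in-guest binary path shown in this log):
	
	  sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new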
	I1217 01:30:40.262429   55831 ssh_runner.go:195] Run: grep 192.168.61.9	control-plane.minikube.internal$ /etc/hosts
	I1217 01:30:40.281797   55831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:30:40.620533   55831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 01:30:40.649489   55831 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/pause-716229 for IP: 192.168.61.9
	I1217 01:30:40.649513   55831 certs.go:195] generating shared ca certs ...
	I1217 01:30:40.649530   55831 certs.go:227] acquiring lock for ca certs: {Name:mk381e1d576792ac916a6048c2225a8ab856de70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:30:40.649705   55831 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-12839/.minikube/ca.key
	I1217 01:30:40.649778   55831 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-12839/.minikube/proxy-client-ca.key
	I1217 01:30:40.649806   55831 certs.go:257] generating profile certs ...
	I1217 01:30:40.649956   55831 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/pause-716229/client.key
	I1217 01:30:40.650102   55831 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/pause-716229/apiserver.key.9d9987e4
	I1217 01:30:40.650170   55831 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/pause-716229/proxy-client.key
	I1217 01:30:40.650357   55831 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/17074.pem (1338 bytes)
	W1217 01:30:40.650396   55831 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-12839/.minikube/certs/17074_empty.pem, impossibly tiny 0 bytes
	I1217 01:30:40.650405   55831 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca-key.pem (1679 bytes)
	I1217 01:30:40.650431   55831 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca.pem (1078 bytes)
	I1217 01:30:40.650453   55831 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/cert.pem (1123 bytes)
	I1217 01:30:40.650483   55831 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/key.pem (1679 bytes)
	I1217 01:30:40.650529   55831 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/files/etc/ssl/certs/170742.pem (1708 bytes)
	I1217 01:30:40.651172   55831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 01:30:40.707541   55831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 01:30:40.769292   55831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 01:30:40.816066   55831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 01:30:40.860727   55831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/pause-716229/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1217 01:30:40.900973   55831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/pause-716229/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 01:30:40.934536   55831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/pause-716229/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 01:30:40.970705   55831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/pause-716229/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 01:30:41.004205   55831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/files/etc/ssl/certs/170742.pem --> /usr/share/ca-certificates/170742.pem (1708 bytes)
	I1217 01:30:41.046143   55831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 01:30:41.083364   55831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/certs/17074.pem --> /usr/share/ca-certificates/17074.pem (1338 bytes)
	I1217 01:30:41.119367   55831 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 01:30:41.148621   55831 ssh_runner.go:195] Run: openssl version
	I1217 01:30:41.156675   55831 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/17074.pem
	I1217 01:30:41.172461   55831 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/17074.pem /etc/ssl/certs/17074.pem
	I1217 01:30:41.188236   55831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17074.pem
	I1217 01:30:41.194693   55831 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:18 /usr/share/ca-certificates/17074.pem
	I1217 01:30:41.194767   55831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17074.pem
	I1217 01:30:41.203719   55831 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 01:30:41.220299   55831 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/170742.pem
	I1217 01:30:41.236087   55831 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/170742.pem /etc/ssl/certs/170742.pem
	I1217 01:30:41.251908   55831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/170742.pem
	I1217 01:30:41.258448   55831 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:18 /usr/share/ca-certificates/170742.pem
	I1217 01:30:41.258512   55831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/170742.pem
	I1217 01:30:41.269418   55831 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 01:30:41.287477   55831 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:30:41.300413   55831 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 01:30:41.313271   55831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:30:41.319461   55831 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:30:41.319530   55831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:30:41.327881   55831 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
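	The pattern above is the c_rehash convention: openssl x509 -hash -noout prints the subject hash that names the certificate's symlink, and the three test -L probes (51391683.0, 3ec20f2e.0, b5213941.0) confirm each cert is linked under its hash in /etc/ssl/certs. By hand (a hypothetical spot-check):
	
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  ls -l "/etc/ssl/certs/${h}.0"    # expect a symlink back to minikubeCA.pem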
	I1217 01:30:41.344267   55831 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 01:30:41.350375   55831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 01:30:41.359771   55831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 01:30:41.368879   55831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 01:30:41.377291   55831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 01:30:41.387445   55831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 01:30:41.396912   55831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
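	Each of the checks above leans on openssl's -checkend flag: it exits 0 only if the certificate is still valid N seconds from now, so -checkend 86400 flags anything expiring within 24 hours. For example (hypothetical reuse of a path from above):
	
	  openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	    && echo "valid for at least 24h"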
	I1217 01:30:41.405959   55831 kubeadm.go:401] StartCluster: {Name:pause-716229 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-716229 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.9 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:30:41.406139   55831 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 01:30:41.406227   55831 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 01:30:41.455737   55831 cri.go:89] found id: "2b923cc6453c863412d08cc49a0d17451f0fe4f0ef72a6c2dae9970574e5668f"
	I1217 01:30:41.455760   55831 cri.go:89] found id: "c6661edccb3b25fc75fc44ed63529a477c27e51decbe411700030f58380f028d"
	I1217 01:30:41.455766   55831 cri.go:89] found id: "4ab70530751bb7195a9e9385ea81c60aca8226c38f366f74a8ade07361033002"
	I1217 01:30:41.455771   55831 cri.go:89] found id: "b7b6956036af3c69a90a6e5dd61d14124fa30850b8ec8db991c70d667888a542"
	I1217 01:30:41.455776   55831 cri.go:89] found id: "8016a5f4fa0b7c8ceda82ce8e8e6d276852bea59b597f635afd89296a9090632"
	I1217 01:30:41.455781   55831 cri.go:89] found id: "a7fbae2a502d7025e987f1bd5ae191db5709dff042be3cf7250d266712f0d834"
	I1217 01:30:41.455785   55831 cri.go:89] found id: ""
	I1217 01:30:41.455835   55831 ssh_runner.go:195] Run: sudo runc list -f json
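	The six IDs above come from crictl's --quiet listing, which prints container IDs only. Dropping --quiet (a hypothetical follow-up, not run by the test) maps them back to pod and container names:
	
	  sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system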

                                                
                                                
** /stderr **
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-716229 -n pause-716229
helpers_test.go:253: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-716229 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p pause-716229 logs -n 25: (1.343693046s)
helpers_test.go:261: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                    ARGS                                                                                                                     │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-428588 sudo cat /var/lib/kubelet/config.yaml                                                                                                                                                                                      │ cilium-428588          │ jenkins │ v1.37.0 │ 17 Dec 25 01:29 UTC │                     │
	│ ssh     │ -p cilium-428588 sudo systemctl status docker --all --full --no-pager                                                                                                                                                                       │ cilium-428588          │ jenkins │ v1.37.0 │ 17 Dec 25 01:29 UTC │                     │
	│ ssh     │ -p cilium-428588 sudo systemctl cat docker --no-pager                                                                                                                                                                                       │ cilium-428588          │ jenkins │ v1.37.0 │ 17 Dec 25 01:29 UTC │                     │
	│ ssh     │ -p cilium-428588 sudo cat /etc/docker/daemon.json                                                                                                                                                                                           │ cilium-428588          │ jenkins │ v1.37.0 │ 17 Dec 25 01:29 UTC │                     │
	│ ssh     │ -p cilium-428588 sudo docker system info                                                                                                                                                                                                    │ cilium-428588          │ jenkins │ v1.37.0 │ 17 Dec 25 01:29 UTC │                     │
	│ ssh     │ -p cilium-428588 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                   │ cilium-428588          │ jenkins │ v1.37.0 │ 17 Dec 25 01:29 UTC │                     │
	│ ssh     │ -p cilium-428588 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                   │ cilium-428588          │ jenkins │ v1.37.0 │ 17 Dec 25 01:29 UTC │                     │
	│ ssh     │ -p cilium-428588 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                              │ cilium-428588          │ jenkins │ v1.37.0 │ 17 Dec 25 01:29 UTC │                     │
	│ ssh     │ -p cilium-428588 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                        │ cilium-428588          │ jenkins │ v1.37.0 │ 17 Dec 25 01:29 UTC │                     │
	│ ssh     │ -p cilium-428588 sudo cri-dockerd --version                                                                                                                                                                                                 │ cilium-428588          │ jenkins │ v1.37.0 │ 17 Dec 25 01:29 UTC │                     │
	│ ssh     │ -p cilium-428588 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                   │ cilium-428588          │ jenkins │ v1.37.0 │ 17 Dec 25 01:29 UTC │                     │
	│ ssh     │ -p cilium-428588 sudo systemctl cat containerd --no-pager                                                                                                                                                                                   │ cilium-428588          │ jenkins │ v1.37.0 │ 17 Dec 25 01:29 UTC │                     │
	│ ssh     │ -p cilium-428588 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                            │ cilium-428588          │ jenkins │ v1.37.0 │ 17 Dec 25 01:29 UTC │                     │
	│ ssh     │ -p cilium-428588 sudo cat /etc/containerd/config.toml                                                                                                                                                                                       │ cilium-428588          │ jenkins │ v1.37.0 │ 17 Dec 25 01:29 UTC │                     │
	│ ssh     │ -p cilium-428588 sudo containerd config dump                                                                                                                                                                                                │ cilium-428588          │ jenkins │ v1.37.0 │ 17 Dec 25 01:29 UTC │                     │
	│ ssh     │ -p cilium-428588 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                         │ cilium-428588          │ jenkins │ v1.37.0 │ 17 Dec 25 01:29 UTC │                     │
	│ ssh     │ -p cilium-428588 sudo systemctl cat crio --no-pager                                                                                                                                                                                         │ cilium-428588          │ jenkins │ v1.37.0 │ 17 Dec 25 01:29 UTC │                     │
	│ ssh     │ -p cilium-428588 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                               │ cilium-428588          │ jenkins │ v1.37.0 │ 17 Dec 25 01:29 UTC │                     │
	│ ssh     │ -p cilium-428588 sudo crio config                                                                                                                                                                                                           │ cilium-428588          │ jenkins │ v1.37.0 │ 17 Dec 25 01:29 UTC │                     │
	│ delete  │ -p cilium-428588                                                                                                                                                                                                                            │ cilium-428588          │ jenkins │ v1.37.0 │ 17 Dec 25 01:29 UTC │ 17 Dec 25 01:29 UTC │
	│ start   │ -p old-k8s-version-625875 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-625875 │ jenkins │ v1.37.0 │ 17 Dec 25 01:29 UTC │ 17 Dec 25 01:30 UTC │
	│ start   │ -p no-preload-395127 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-395127      │ jenkins │ v1.37.0 │ 17 Dec 25 01:29 UTC │                     │
	│ start   │ -p pause-716229 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                                                              │ pause-716229           │ jenkins │ v1.37.0 │ 17 Dec 25 01:30 UTC │ 17 Dec 25 01:30 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-625875 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                │ old-k8s-version-625875 │ jenkins │ v1.37.0 │ 17 Dec 25 01:30 UTC │ 17 Dec 25 01:30 UTC │
	│ stop    │ -p old-k8s-version-625875 --alsologtostderr -v=3                                                                                                                                                                                            │ old-k8s-version-625875 │ jenkins │ v1.37.0 │ 17 Dec 25 01:30 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 01:30:19
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 01:30:19.435564   55831 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:30:19.435724   55831 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:30:19.435736   55831 out.go:374] Setting ErrFile to fd 2...
	I1217 01:30:19.435742   55831 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:30:19.436062   55831 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12839/.minikube/bin
	I1217 01:30:19.436673   55831 out.go:368] Setting JSON to false
	I1217 01:30:19.437941   55831 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7965,"bootTime":1765927054,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 01:30:19.437992   55831 start.go:143] virtualization: kvm guest
	I1217 01:30:19.440813   55831 out.go:179] * [pause-716229] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 01:30:19.444039   55831 notify.go:221] Checking for updates...
	I1217 01:30:19.444067   55831 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 01:30:19.445577   55831 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 01:30:19.447011   55831 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-12839/kubeconfig
	I1217 01:30:19.448616   55831 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12839/.minikube
	I1217 01:30:19.449986   55831 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 01:30:19.451166   55831 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 01:30:19.453108   55831 config.go:182] Loaded profile config "pause-716229": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:30:19.453690   55831 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 01:30:19.494887   55831 out.go:179] * Using the kvm2 driver based on existing profile
	I1217 01:30:19.495979   55831 start.go:309] selected driver: kvm2
	I1217 01:30:19.495995   55831 start.go:927] validating driver "kvm2" against &{Name:pause-716229 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-716229 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.9 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:30:19.496174   55831 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 01:30:19.497122   55831 cni.go:84] Creating CNI manager for ""
	I1217 01:30:19.497194   55831 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 01:30:19.497254   55831 start.go:353] cluster config:
	{Name:pause-716229 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-716229 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.9 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:30:19.497389   55831 iso.go:125] acquiring lock: {Name:mk94a221d1243bc618ab687e91468d7a3f9fe960 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:30:19.499240   55831 out.go:179] * Starting "pause-716229" primary control-plane node in "pause-716229" cluster
	I1217 01:30:19.500444   55831 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 01:30:19.500486   55831 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-12839/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1217 01:30:19.500499   55831 cache.go:65] Caching tarball of preloaded images
	I1217 01:30:19.500597   55831 preload.go:238] Found /home/jenkins/minikube-integration/22168-12839/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 01:30:19.500610   55831 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1217 01:30:19.500750   55831 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/pause-716229/config.json ...
	I1217 01:30:19.500981   55831 start.go:360] acquireMachinesLock for pause-716229: {Name:mke100036b6b648b2e8844ce094d9d979b4c8eb4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1217 01:30:19.501078   55831 start.go:364] duration metric: took 78.023µs to acquireMachinesLock for "pause-716229"
	I1217 01:30:19.501097   55831 start.go:96] Skipping create...Using existing machine configuration
	I1217 01:30:19.501104   55831 fix.go:54] fixHost starting: 
	I1217 01:30:19.503236   55831 fix.go:112] recreateIfNeeded on pause-716229: state=Running err=<nil>
	W1217 01:30:19.503273   55831 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 01:30:17.918765   55454 ssh_runner.go:195] Run: systemctl --version
	I1217 01:30:17.943400   55454 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 01:30:18.106196   55454 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 01:30:18.113527   55454 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 01:30:18.113614   55454 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 01:30:18.134537   55454 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 01:30:18.134562   55454 start.go:496] detecting cgroup driver to use...
	I1217 01:30:18.134647   55454 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 01:30:18.154213   55454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 01:30:18.172110   55454 docker.go:218] disabling cri-docker service (if available) ...
	I1217 01:30:18.172170   55454 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 01:30:18.191592   55454 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 01:30:18.209930   55454 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 01:30:18.361857   55454 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 01:30:18.581111   55454 docker.go:234] disabling docker service ...
	I1217 01:30:18.581230   55454 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 01:30:18.598567   55454 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 01:30:18.614462   55454 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 01:30:18.782760   55454 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 01:30:18.946218   55454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 01:30:18.964663   55454 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 01:30:18.988966   55454 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 01:30:18.989047   55454 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:30:19.002547   55454 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 01:30:19.002635   55454 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:30:19.015876   55454 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:30:19.029565   55454 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:30:19.043526   55454 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 01:30:19.058113   55454 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:30:19.071938   55454 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:30:19.093094   55454 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:30:19.106495   55454 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 01:30:19.120291   55454 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1217 01:30:19.120356   55454 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1217 01:30:19.143690   55454 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 01:30:19.158984   55454 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:30:19.308060   55454 ssh_runner.go:195] Run: sudo systemctl restart crio
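The stretch of log above shows the guest-side runtime preparation minikube performs over SSH before restarting CRI-O: cri-docker and docker are stopped and masked, crictl is pointed at the CRI-O socket, and the pause image, cgroup driver and sysctls are rewritten in /etc/crio/crio.conf.d/02-crio.conf with sed before the service is restarted. A condensed sketch of those commands, using the same paths and values as this run (an illustration, not minikube's own code):

    #!/usr/bin/env bash
    # Condensed sketch of the CRI-O setup commands seen in the log above.
    set -euo pipefail
    CONF=/etc/crio/crio.conf.d/02-crio.conf

    # Point crictl at the CRI-O socket.
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml

    # Pause image, cgroup driver and conmon cgroup, mirroring the sed edits above.
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"

    # Kernel prerequisites: bridge traffic through iptables and IPv4 forwarding.
    sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'

    sudo systemctl daemon-reload
    sudo systemctl restart crio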
	I1217 01:30:19.445792   55454 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 01:30:19.445862   55454 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 01:30:19.452514   55454 start.go:564] Will wait 60s for crictl version
	I1217 01:30:19.452576   55454 ssh_runner.go:195] Run: which crictl
	I1217 01:30:19.457325   55454 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1217 01:30:19.502208   55454 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1217 01:30:19.502285   55454 ssh_runner.go:195] Run: crio --version
	I1217 01:30:19.536243   55454 ssh_runner.go:195] Run: crio --version
	I1217 01:30:19.575076   55454 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.29.1 ...
	I1217 01:30:16.895058   51086 api_server.go:253] Checking apiserver healthz at https://192.168.39.33:8443/healthz ...
	I1217 01:30:18.023137   55074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:30:18.523091   55074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:30:19.022790   55074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:30:19.523271   55074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:30:20.022748   55074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:30:20.523271   55074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:30:21.023039   55074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:30:21.523098   55074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:30:22.023299   55074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:30:22.523045   55074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:30:19.579495   55454 main.go:143] libmachine: domain no-preload-395127 has defined MAC address 52:54:00:ee:9f:17 in network mk-no-preload-395127
	I1217 01:30:19.579989   55454 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:17", ip: ""} in network mk-no-preload-395127: {Iface:virbr5 ExpiryTime:2025-12-17 02:30:13 +0000 UTC Type:0 Mac:52:54:00:ee:9f:17 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:no-preload-395127 Clientid:01:52:54:00:ee:9f:17}
	I1217 01:30:19.580044   55454 main.go:143] libmachine: domain no-preload-395127 has defined IP address 192.168.83.246 and MAC address 52:54:00:ee:9f:17 in network mk-no-preload-395127
	I1217 01:30:19.580301   55454 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I1217 01:30:19.585678   55454 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 01:30:19.603304   55454 kubeadm.go:884] updating cluster {Name:no-preload-395127 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-395127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.246 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 01:30:19.603502   55454 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1217 01:30:19.603559   55454 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 01:30:19.638093   55454 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1217 01:30:19.638117   55454 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1217 01:30:19.638181   55454 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 01:30:19.638203   55454 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1217 01:30:19.638242   55454 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1217 01:30:19.638257   55454 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1217 01:30:19.638203   55454 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1217 01:30:19.638430   55454 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1217 01:30:19.638449   55454 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1217 01:30:19.638470   55454 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1217 01:30:19.640098   55454 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1217 01:30:19.640324   55454 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1217 01:30:19.640351   55454 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 01:30:19.640476   55454 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1217 01:30:19.640792   55454 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1217 01:30:19.641068   55454 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1217 01:30:19.641609   55454 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1217 01:30:19.641688   55454 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1217 01:30:19.760771   55454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1217 01:30:19.763269   55454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1217 01:30:19.769338   55454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1217 01:30:19.784594   55454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1217 01:30:19.787268   55454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1217 01:30:19.801226   55454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1217 01:30:19.812037   55454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1217 01:30:19.858294   55454 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc" in container runtime
	I1217 01:30:19.858331   55454 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1217 01:30:19.858377   55454 ssh_runner.go:195] Run: which crictl
	I1217 01:30:19.961625   55454 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1" in container runtime
	I1217 01:30:19.961676   55454 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1217 01:30:19.961677   55454 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b" in container runtime
	I1217 01:30:19.961709   55454 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1217 01:30:19.961718   55454 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810" in container runtime
	I1217 01:30:19.961732   55454 ssh_runner.go:195] Run: which crictl
	I1217 01:30:19.961740   55454 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1217 01:30:19.961761   55454 ssh_runner.go:195] Run: which crictl
	I1217 01:30:19.961774   55454 ssh_runner.go:195] Run: which crictl
	I1217 01:30:19.967153   55454 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1217 01:30:19.967186   55454 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1217 01:30:19.967228   55454 ssh_runner.go:195] Run: which crictl
	I1217 01:30:19.988225   55454 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46" in container runtime
	I1217 01:30:19.988267   55454 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1217 01:30:19.988316   55454 ssh_runner.go:195] Run: which crictl
	I1217 01:30:19.991260   55454 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139" in container runtime
	I1217 01:30:19.991298   55454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1217 01:30:19.991301   55454 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1217 01:30:19.991323   55454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1217 01:30:19.991342   55454 ssh_runner.go:195] Run: which crictl
	I1217 01:30:19.991388   55454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1217 01:30:19.991397   55454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1217 01:30:19.991452   55454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1217 01:30:19.999640   55454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1217 01:30:20.113226   55454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1217 01:30:20.113226   55454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1217 01:30:20.115274   55454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1217 01:30:20.115306   55454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1217 01:30:20.115325   55454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1217 01:30:20.115278   55454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1217 01:30:20.115331   55454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1217 01:30:20.246224   55454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1217 01:30:20.246304   55454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1217 01:30:20.246333   55454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1217 01:30:20.246356   55454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1217 01:30:20.246363   55454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1217 01:30:20.246401   55454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1217 01:30:20.246417   55454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1217 01:30:20.369651   55454 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22168-12839/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1217 01:30:20.369696   55454 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22168-12839/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0
	I1217 01:30:20.369765   55454 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22168-12839/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1217 01:30:20.369787   55454 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1217 01:30:20.369802   55454 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22168-12839/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1217 01:30:20.369831   55454 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1217 01:30:20.369849   55454 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22168-12839/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1217 01:30:20.369767   55454 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1217 01:30:20.369769   55454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1217 01:30:20.369895   55454 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1217 01:30:20.369910   55454 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1217 01:30:20.369807   55454 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22168-12839/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1217 01:30:20.370026   55454 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1217 01:30:20.424105   55454 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22168-12839/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1217 01:30:20.424135   55454 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1217 01:30:20.424166   55454 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1217 01:30:20.424170   55454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (17239040 bytes)
	I1217 01:30:20.424183   55454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (22883840 bytes)
	I1217 01:30:20.424220   55454 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1217 01:30:20.424220   55454 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1217 01:30:20.424282   55454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (27682304 bytes)
	I1217 01:30:20.424308   55454 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1217 01:30:20.424251   55454 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1217 01:30:20.424329   55454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (25788928 bytes)
	I1217 01:30:20.424291   55454 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1217 01:30:20.424382   55454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (23131648 bytes)
	I1217 01:30:20.424354   55454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1217 01:30:20.444543   55454 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1217 01:30:20.444571   55454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (23562752 bytes)
	I1217 01:30:20.546396   55454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 01:30:20.556961   55454 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1217 01:30:20.557051   55454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1217 01:30:20.751302   55454 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1217 01:30:20.751348   55454 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 01:30:20.751408   55454 ssh_runner.go:195] Run: which crictl
	I1217 01:30:21.061521   55454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 01:30:21.061567   55454 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22168-12839/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1217 01:30:21.061609   55454 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1217 01:30:21.061675   55454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1217 01:30:21.208821   55454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
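Because no preload tarball exists for v1.35.0-beta.0, the 55454 process above falls back to loading images one by one: it stats each tarball under /var/lib/minikube/images on the VM, scps it from the host cache when missing, then loads it with podman. A minimal sketch of that flow for a single image, with the guest IP and paths taken from this run and the SSH key path as a placeholder (not minikube's implementation):

    #!/usr/bin/env bash
    # Sketch of the stat -> scp -> podman load flow for one cached image.
    set -euo pipefail
    VM=192.168.83.246                     # no-preload-395127 guest IP (from the log)
    KEY=$HOME/.minikube/machines/no-preload-395127/id_rsa
    CACHE=$HOME/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
    REMOTE=/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0

    # Existence check; a non-zero exit means the tarball is not on the VM yet.
    if ! ssh -i "$KEY" docker@"$VM" stat -c '%s %y' "$REMOTE" >/dev/null 2>&1; then
      scp -i "$KEY" "$CACHE" docker@"$VM":"$REMOTE"
    fi

    # Load the tarball into CRI-O's image store.
    ssh -i "$KEY" docker@"$VM" sudo podman load -i "$REMOTE"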
	I1217 01:30:19.504720   55831 out.go:252] * Updating the running kvm2 "pause-716229" VM ...
	I1217 01:30:19.504749   55831 machine.go:94] provisionDockerMachine start ...
	I1217 01:30:19.507267   55831 main.go:143] libmachine: domain pause-716229 has defined MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:19.507772   55831 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:cd:79", ip: ""} in network mk-pause-716229: {Iface:virbr3 ExpiryTime:2025-12-17 02:29:14 +0000 UTC Type:0 Mac:52:54:00:41:cd:79 Iaid: IPaddr:192.168.61.9 Prefix:24 Hostname:pause-716229 Clientid:01:52:54:00:41:cd:79}
	I1217 01:30:19.507796   55831 main.go:143] libmachine: domain pause-716229 has defined IP address 192.168.61.9 and MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:19.507953   55831 main.go:143] libmachine: Using SSH client type: native
	I1217 01:30:19.508187   55831 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.61.9 22 <nil> <nil>}
	I1217 01:30:19.508203   55831 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 01:30:19.638055   55831 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-716229
	
	I1217 01:30:19.638093   55831 buildroot.go:166] provisioning hostname "pause-716229"
	I1217 01:30:19.643779   55831 main.go:143] libmachine: domain pause-716229 has defined MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:19.644347   55831 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:cd:79", ip: ""} in network mk-pause-716229: {Iface:virbr3 ExpiryTime:2025-12-17 02:29:14 +0000 UTC Type:0 Mac:52:54:00:41:cd:79 Iaid: IPaddr:192.168.61.9 Prefix:24 Hostname:pause-716229 Clientid:01:52:54:00:41:cd:79}
	I1217 01:30:19.644374   55831 main.go:143] libmachine: domain pause-716229 has defined IP address 192.168.61.9 and MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:19.644571   55831 main.go:143] libmachine: Using SSH client type: native
	I1217 01:30:19.644819   55831 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.61.9 22 <nil> <nil>}
	I1217 01:30:19.644833   55831 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-716229 && echo "pause-716229" | sudo tee /etc/hostname
	I1217 01:30:19.787607   55831 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-716229
	
	I1217 01:30:19.791566   55831 main.go:143] libmachine: domain pause-716229 has defined MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:19.792103   55831 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:cd:79", ip: ""} in network mk-pause-716229: {Iface:virbr3 ExpiryTime:2025-12-17 02:29:14 +0000 UTC Type:0 Mac:52:54:00:41:cd:79 Iaid: IPaddr:192.168.61.9 Prefix:24 Hostname:pause-716229 Clientid:01:52:54:00:41:cd:79}
	I1217 01:30:19.792138   55831 main.go:143] libmachine: domain pause-716229 has defined IP address 192.168.61.9 and MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:19.792331   55831 main.go:143] libmachine: Using SSH client type: native
	I1217 01:30:19.792620   55831 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.61.9 22 <nil> <nil>}
	I1217 01:30:19.792645   55831 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-716229' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-716229/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-716229' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 01:30:19.917161   55831 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 01:30:19.917193   55831 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22168-12839/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-12839/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-12839/.minikube}
	I1217 01:30:19.917223   55831 buildroot.go:174] setting up certificates
	I1217 01:30:19.917236   55831 provision.go:84] configureAuth start
	I1217 01:30:19.920490   55831 main.go:143] libmachine: domain pause-716229 has defined MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:19.921061   55831 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:cd:79", ip: ""} in network mk-pause-716229: {Iface:virbr3 ExpiryTime:2025-12-17 02:29:14 +0000 UTC Type:0 Mac:52:54:00:41:cd:79 Iaid: IPaddr:192.168.61.9 Prefix:24 Hostname:pause-716229 Clientid:01:52:54:00:41:cd:79}
	I1217 01:30:19.921099   55831 main.go:143] libmachine: domain pause-716229 has defined IP address 192.168.61.9 and MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:19.924101   55831 main.go:143] libmachine: domain pause-716229 has defined MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:19.924504   55831 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:cd:79", ip: ""} in network mk-pause-716229: {Iface:virbr3 ExpiryTime:2025-12-17 02:29:14 +0000 UTC Type:0 Mac:52:54:00:41:cd:79 Iaid: IPaddr:192.168.61.9 Prefix:24 Hostname:pause-716229 Clientid:01:52:54:00:41:cd:79}
	I1217 01:30:19.924527   55831 main.go:143] libmachine: domain pause-716229 has defined IP address 192.168.61.9 and MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:19.924692   55831 provision.go:143] copyHostCerts
	I1217 01:30:19.924752   55831 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12839/.minikube/ca.pem, removing ...
	I1217 01:30:19.924772   55831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12839/.minikube/ca.pem
	I1217 01:30:19.924841   55831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-12839/.minikube/ca.pem (1078 bytes)
	I1217 01:30:19.924988   55831 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12839/.minikube/cert.pem, removing ...
	I1217 01:30:19.925002   55831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12839/.minikube/cert.pem
	I1217 01:30:19.925047   55831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-12839/.minikube/cert.pem (1123 bytes)
	I1217 01:30:19.925151   55831 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12839/.minikube/key.pem, removing ...
	I1217 01:30:19.925175   55831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12839/.minikube/key.pem
	I1217 01:30:19.925208   55831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-12839/.minikube/key.pem (1679 bytes)
	I1217 01:30:19.925360   55831 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-12839/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca-key.pem org=jenkins.pause-716229 san=[127.0.0.1 192.168.61.9 localhost minikube pause-716229]
	I1217 01:30:19.984817   55831 provision.go:177] copyRemoteCerts
	I1217 01:30:19.984903   55831 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 01:30:19.987915   55831 main.go:143] libmachine: domain pause-716229 has defined MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:19.988514   55831 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:cd:79", ip: ""} in network mk-pause-716229: {Iface:virbr3 ExpiryTime:2025-12-17 02:29:14 +0000 UTC Type:0 Mac:52:54:00:41:cd:79 Iaid: IPaddr:192.168.61.9 Prefix:24 Hostname:pause-716229 Clientid:01:52:54:00:41:cd:79}
	I1217 01:30:19.988557   55831 main.go:143] libmachine: domain pause-716229 has defined IP address 192.168.61.9 and MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:19.988798   55831 sshutil.go:53] new ssh client: &{IP:192.168.61.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/pause-716229/id_rsa Username:docker}
	I1217 01:30:20.085814   55831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 01:30:20.125959   55831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1217 01:30:20.166703   55831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 01:30:20.208381   55831 provision.go:87] duration metric: took 291.127344ms to configureAuth
	I1217 01:30:20.208410   55831 buildroot.go:189] setting minikube options for container-runtime
	I1217 01:30:20.208679   55831 config.go:182] Loaded profile config "pause-716229": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:30:20.212425   55831 main.go:143] libmachine: domain pause-716229 has defined MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:20.212953   55831 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:cd:79", ip: ""} in network mk-pause-716229: {Iface:virbr3 ExpiryTime:2025-12-17 02:29:14 +0000 UTC Type:0 Mac:52:54:00:41:cd:79 Iaid: IPaddr:192.168.61.9 Prefix:24 Hostname:pause-716229 Clientid:01:52:54:00:41:cd:79}
	I1217 01:30:20.212990   55831 main.go:143] libmachine: domain pause-716229 has defined IP address 192.168.61.9 and MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:20.213266   55831 main.go:143] libmachine: Using SSH client type: native
	I1217 01:30:20.213561   55831 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.61.9 22 <nil> <nil>}
	I1217 01:30:20.213591   55831 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 01:30:21.896534   51086 api_server.go:269] stopped: https://192.168.39.33:8443/healthz: Get "https://192.168.39.33:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1217 01:30:21.896605   51086 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:30:21.896668   51086 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:30:21.939001   51086 cri.go:89] found id: "f3fc173ad1f59bcf9f322121a37044b907e449b8da0ff80d4a58047149d92060"
	I1217 01:30:21.939045   51086 cri.go:89] found id: "a60d85d10467e0ec2ad371d5ea0776e03d016cdc978561fa1498e90cabe0974e"
	I1217 01:30:21.939052   51086 cri.go:89] found id: ""
	I1217 01:30:21.939061   51086 logs.go:282] 2 containers: [f3fc173ad1f59bcf9f322121a37044b907e449b8da0ff80d4a58047149d92060 a60d85d10467e0ec2ad371d5ea0776e03d016cdc978561fa1498e90cabe0974e]
	I1217 01:30:21.939136   51086 ssh_runner.go:195] Run: which crictl
	I1217 01:30:21.944115   51086 ssh_runner.go:195] Run: which crictl
	I1217 01:30:21.948749   51086 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:30:21.948831   51086 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:30:21.986091   51086 cri.go:89] found id: "4f7ae7022f37ebd3af667ff2925618ae8cba786b93551c6a75ea31610b440a23"
	I1217 01:30:21.986131   51086 cri.go:89] found id: ""
	I1217 01:30:21.986141   51086 logs.go:282] 1 containers: [4f7ae7022f37ebd3af667ff2925618ae8cba786b93551c6a75ea31610b440a23]
	I1217 01:30:21.986213   51086 ssh_runner.go:195] Run: which crictl
	I1217 01:30:21.990716   51086 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:30:21.990789   51086 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:30:22.030581   51086 cri.go:89] found id: "4e737919b3d271d8ca8cbf13101f204569ffdaf5b0e2afcbd8ab8a5d2febccbb"
	I1217 01:30:22.030611   51086 cri.go:89] found id: "af1f0e1b40070c8723057b438d0b9fcec2ba7192a110956e8eb26a35dad9df7c"
	I1217 01:30:22.030617   51086 cri.go:89] found id: ""
	I1217 01:30:22.030627   51086 logs.go:282] 2 containers: [4e737919b3d271d8ca8cbf13101f204569ffdaf5b0e2afcbd8ab8a5d2febccbb af1f0e1b40070c8723057b438d0b9fcec2ba7192a110956e8eb26a35dad9df7c]
	I1217 01:30:22.030696   51086 ssh_runner.go:195] Run: which crictl
	I1217 01:30:22.035383   51086 ssh_runner.go:195] Run: which crictl
	I1217 01:30:22.039755   51086 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:30:22.039838   51086 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:30:22.077306   51086 cri.go:89] found id: "11c7ac9704180c5215010fc334223b3ee30a5e3fcbf5bc9668670b480422a43b"
	I1217 01:30:22.077340   51086 cri.go:89] found id: "fc4edf606d72313439460b2d586b38df7af1e915c6576704df53a9f7e030a722"
	I1217 01:30:22.077348   51086 cri.go:89] found id: ""
	I1217 01:30:22.077357   51086 logs.go:282] 2 containers: [11c7ac9704180c5215010fc334223b3ee30a5e3fcbf5bc9668670b480422a43b fc4edf606d72313439460b2d586b38df7af1e915c6576704df53a9f7e030a722]
	I1217 01:30:22.077426   51086 ssh_runner.go:195] Run: which crictl
	I1217 01:30:22.082080   51086 ssh_runner.go:195] Run: which crictl
	I1217 01:30:22.086755   51086 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:30:22.086839   51086 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:30:22.134568   51086 cri.go:89] found id: "2ae0779ffea3f0ab8e7bd0cdb83221c0aa0fb5faea9cec5cb3745615ebae1c13"
	I1217 01:30:22.134592   51086 cri.go:89] found id: ""
	I1217 01:30:22.134602   51086 logs.go:282] 1 containers: [2ae0779ffea3f0ab8e7bd0cdb83221c0aa0fb5faea9cec5cb3745615ebae1c13]
	I1217 01:30:22.134659   51086 ssh_runner.go:195] Run: which crictl
	I1217 01:30:22.140647   51086 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:30:22.140723   51086 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:30:22.182597   51086 cri.go:89] found id: "92af4713cd6f5e3fd61ec4f8918aa425ca8e6b874f208f1ed31de3d4b68216c9"
	I1217 01:30:22.182621   51086 cri.go:89] found id: "889d1c9d59279febbb656a88595a686eec2af7afcbdd3130103e5d346977780c"
	I1217 01:30:22.182626   51086 cri.go:89] found id: ""
	I1217 01:30:22.182636   51086 logs.go:282] 2 containers: [92af4713cd6f5e3fd61ec4f8918aa425ca8e6b874f208f1ed31de3d4b68216c9 889d1c9d59279febbb656a88595a686eec2af7afcbdd3130103e5d346977780c]
	I1217 01:30:22.182708   51086 ssh_runner.go:195] Run: which crictl
	I1217 01:30:22.187128   51086 ssh_runner.go:195] Run: which crictl
	I1217 01:30:22.191716   51086 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:30:22.191782   51086 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:30:22.237439   51086 cri.go:89] found id: ""
	I1217 01:30:22.237467   51086 logs.go:282] 0 containers: []
	W1217 01:30:22.237479   51086 logs.go:284] No container was found matching "kindnet"
	I1217 01:30:22.237488   51086 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:30:22.237553   51086 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:30:22.290443   51086 cri.go:89] found id: "04162ab2dd069b12b50be94adf86c654f689f3762162dd1256b82a03c40d9274"
	I1217 01:30:22.290535   51086 cri.go:89] found id: "02ce2c49a1d0f774c8135d8233229979162a7bb38d6f23fcdc2cd03bd2231c24"
	I1217 01:30:22.290546   51086 cri.go:89] found id: ""
	I1217 01:30:22.290572   51086 logs.go:282] 2 containers: [04162ab2dd069b12b50be94adf86c654f689f3762162dd1256b82a03c40d9274 02ce2c49a1d0f774c8135d8233229979162a7bb38d6f23fcdc2cd03bd2231c24]
	I1217 01:30:22.290640   51086 ssh_runner.go:195] Run: which crictl
	I1217 01:30:22.295256   51086 ssh_runner.go:195] Run: which crictl
	I1217 01:30:22.300346   51086 logs.go:123] Gathering logs for coredns [af1f0e1b40070c8723057b438d0b9fcec2ba7192a110956e8eb26a35dad9df7c] ...
	I1217 01:30:22.300369   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af1f0e1b40070c8723057b438d0b9fcec2ba7192a110956e8eb26a35dad9df7c"
	I1217 01:30:22.348678   51086 logs.go:123] Gathering logs for kube-scheduler [11c7ac9704180c5215010fc334223b3ee30a5e3fcbf5bc9668670b480422a43b] ...
	I1217 01:30:22.348714   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c7ac9704180c5215010fc334223b3ee30a5e3fcbf5bc9668670b480422a43b"
	I1217 01:30:22.445622   51086 logs.go:123] Gathering logs for kube-scheduler [fc4edf606d72313439460b2d586b38df7af1e915c6576704df53a9f7e030a722] ...
	I1217 01:30:22.445661   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc4edf606d72313439460b2d586b38df7af1e915c6576704df53a9f7e030a722"
	I1217 01:30:22.491628   51086 logs.go:123] Gathering logs for kube-controller-manager [889d1c9d59279febbb656a88595a686eec2af7afcbdd3130103e5d346977780c] ...
	I1217 01:30:22.491655   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 889d1c9d59279febbb656a88595a686eec2af7afcbdd3130103e5d346977780c"
	I1217 01:30:22.538713   51086 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:30:22.538758   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:30:22.891737   51086 logs.go:123] Gathering logs for dmesg ...
	I1217 01:30:22.891773   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:30:22.908994   51086 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:30:22.909037   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
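The 51086 process above is collecting post-mortem material after the apiserver healthz check timed out: it lists containers per component with crictl, tails their logs, and gathers the CRI-O journal, dmesg and `kubectl describe nodes`. The same collection can be reproduced by hand on the node; the commands below are copied from this log (container names and the kubectl binary path vary per run):

    #!/usr/bin/env bash
    # Manual equivalent of the log-gathering commands above.
    for id in $(sudo /usr/bin/crictl ps -a --quiet --name=kube-apiserver); do
      sudo /usr/bin/crictl logs --tail 400 "$id"
    done
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig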
	I1217 01:30:23.022809   55074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:30:23.523273   55074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:30:24.023255   55074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:30:24.523359   55074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:30:25.022434   55074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:30:25.522884   55074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:30:26.022354   55074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:30:26.522298   55074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:30:27.022818   55074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:30:27.522335   55074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:30:23.777589   55454 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (2.715885236s)
	I1217 01:30:23.777620   55454 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22168-12839/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1217 01:30:23.777651   55454 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1217 01:30:23.777650   55454 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.568791096s)
	I1217 01:30:23.777702   55454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0
	I1217 01:30:23.777718   55454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 01:30:26.064481   55454 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0: (2.286752732s)
	I1217 01:30:26.064521   55454 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22168-12839/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1217 01:30:26.064547   55454 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1217 01:30:26.064608   55454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1217 01:30:26.064544   55454 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.28680286s)
	I1217 01:30:26.064682   55454 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22168-12839/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1217 01:30:26.064775   55454 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1217 01:30:28.022847   55074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:30:28.226137   55074 kubeadm.go:1114] duration metric: took 12.443127084s to wait for elevateKubeSystemPrivileges
	I1217 01:30:28.226177   55074 kubeadm.go:403] duration metric: took 24.315193301s to StartCluster
	I1217 01:30:28.226196   55074 settings.go:142] acquiring lock: {Name:mk0fa06a6a557f0851b041158306daec92094c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:30:28.226284   55074 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-12839/kubeconfig
	I1217 01:30:28.227716   55074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12839/kubeconfig: {Name:mk0867cff530c231805e36a9674d4fe6612173a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:30:28.228005   55074 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.72.223 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 01:30:28.228074   55074 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 01:30:28.228225   55074 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-625875"
	I1217 01:30:28.228244   55074 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-625875"
	I1217 01:30:28.228244   55074 config.go:182] Loaded profile config "old-k8s-version-625875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1217 01:30:28.228281   55074 host.go:66] Checking if "old-k8s-version-625875" exists ...
	I1217 01:30:28.228053   55074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 01:30:28.228375   55074 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-625875"
	I1217 01:30:28.228410   55074 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-625875"
	I1217 01:30:28.229913   55074 out.go:179] * Verifying Kubernetes components...
	I1217 01:30:28.232530   55074 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-625875"
	I1217 01:30:28.232567   55074 host.go:66] Checking if "old-k8s-version-625875" exists ...
	I1217 01:30:28.233234   55074 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 01:30:25.882124   55831 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 01:30:25.882158   55831 machine.go:97] duration metric: took 6.37739993s to provisionDockerMachine
	I1217 01:30:25.882173   55831 start.go:293] postStartSetup for "pause-716229" (driver="kvm2")
	I1217 01:30:25.882210   55831 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 01:30:25.882298   55831 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 01:30:25.886127   55831 main.go:143] libmachine: domain pause-716229 has defined MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:25.886654   55831 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:cd:79", ip: ""} in network mk-pause-716229: {Iface:virbr3 ExpiryTime:2025-12-17 02:29:14 +0000 UTC Type:0 Mac:52:54:00:41:cd:79 Iaid: IPaddr:192.168.61.9 Prefix:24 Hostname:pause-716229 Clientid:01:52:54:00:41:cd:79}
	I1217 01:30:25.886683   55831 main.go:143] libmachine: domain pause-716229 has defined IP address 192.168.61.9 and MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:25.886836   55831 sshutil.go:53] new ssh client: &{IP:192.168.61.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/pause-716229/id_rsa Username:docker}
	I1217 01:30:25.981319   55831 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 01:30:25.986394   55831 info.go:137] Remote host: Buildroot 2025.02
	I1217 01:30:25.986418   55831 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12839/.minikube/addons for local assets ...
	I1217 01:30:25.986487   55831 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12839/.minikube/files for local assets ...
	I1217 01:30:25.986592   55831 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-12839/.minikube/files/etc/ssl/certs/170742.pem -> 170742.pem in /etc/ssl/certs
	I1217 01:30:25.986710   55831 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 01:30:25.999107   55831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/files/etc/ssl/certs/170742.pem --> /etc/ssl/certs/170742.pem (1708 bytes)
	I1217 01:30:26.032262   55831 start.go:296] duration metric: took 150.072721ms for postStartSetup
	I1217 01:30:26.032309   55831 fix.go:56] duration metric: took 6.531204073s for fixHost
	I1217 01:30:26.035448   55831 main.go:143] libmachine: domain pause-716229 has defined MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:26.035824   55831 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:cd:79", ip: ""} in network mk-pause-716229: {Iface:virbr3 ExpiryTime:2025-12-17 02:29:14 +0000 UTC Type:0 Mac:52:54:00:41:cd:79 Iaid: IPaddr:192.168.61.9 Prefix:24 Hostname:pause-716229 Clientid:01:52:54:00:41:cd:79}
	I1217 01:30:26.035847   55831 main.go:143] libmachine: domain pause-716229 has defined IP address 192.168.61.9 and MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:26.036044   55831 main.go:143] libmachine: Using SSH client type: native
	I1217 01:30:26.036305   55831 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.61.9 22 <nil> <nil>}
	I1217 01:30:26.036319   55831 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1217 01:30:26.159443   55831 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765935026.155536878
	
	I1217 01:30:26.159474   55831 fix.go:216] guest clock: 1765935026.155536878
	I1217 01:30:26.159482   55831 fix.go:229] Guest: 2025-12-17 01:30:26.155536878 +0000 UTC Remote: 2025-12-17 01:30:26.032314252 +0000 UTC m=+6.652094255 (delta=123.222626ms)
	I1217 01:30:26.159501   55831 fix.go:200] guest clock delta is within tolerance: 123.222626ms
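The fix.go lines above show how minikube guards against guest clock skew: it runs date +%s.%N inside the VM over SSH, diffs the result against the host clock, and resyncs only when the delta exceeds a tolerance (here the ~123ms delta passed). A minimal sketch of the same check, reusing the SSH key path, user, and IP visible in this run; the idea of a fixed 1-second threshold is illustrative, not minikube's actual tolerance:

	# hedged sketch: compare guest and host wall clocks over SSH
	key=~/.minikube/machines/pause-716229/id_rsa          # key path from this log
	guest=$(ssh -i "$key" docker@192.168.61.9 'date +%s.%N')
	host=$(date +%s.%N)
	delta=$(echo "$host - $guest" | bc)
	echo "guest clock delta: ${delta}s"                   # resync only if the delta is large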
	I1217 01:30:26.159507   55831 start.go:83] releasing machines lock for "pause-716229", held for 6.658418729s
	I1217 01:30:26.163103   55831 main.go:143] libmachine: domain pause-716229 has defined MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:26.163720   55831 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:cd:79", ip: ""} in network mk-pause-716229: {Iface:virbr3 ExpiryTime:2025-12-17 02:29:14 +0000 UTC Type:0 Mac:52:54:00:41:cd:79 Iaid: IPaddr:192.168.61.9 Prefix:24 Hostname:pause-716229 Clientid:01:52:54:00:41:cd:79}
	I1217 01:30:26.163776   55831 main.go:143] libmachine: domain pause-716229 has defined IP address 192.168.61.9 and MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:26.164643   55831 ssh_runner.go:195] Run: cat /version.json
	I1217 01:30:26.164777   55831 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 01:30:26.168104   55831 main.go:143] libmachine: domain pause-716229 has defined MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:26.168166   55831 main.go:143] libmachine: domain pause-716229 has defined MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:26.168563   55831 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:cd:79", ip: ""} in network mk-pause-716229: {Iface:virbr3 ExpiryTime:2025-12-17 02:29:14 +0000 UTC Type:0 Mac:52:54:00:41:cd:79 Iaid: IPaddr:192.168.61.9 Prefix:24 Hostname:pause-716229 Clientid:01:52:54:00:41:cd:79}
	I1217 01:30:26.168607   55831 main.go:143] libmachine: domain pause-716229 has defined IP address 192.168.61.9 and MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:26.168657   55831 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:cd:79", ip: ""} in network mk-pause-716229: {Iface:virbr3 ExpiryTime:2025-12-17 02:29:14 +0000 UTC Type:0 Mac:52:54:00:41:cd:79 Iaid: IPaddr:192.168.61.9 Prefix:24 Hostname:pause-716229 Clientid:01:52:54:00:41:cd:79}
	I1217 01:30:26.168705   55831 main.go:143] libmachine: domain pause-716229 has defined IP address 192.168.61.9 and MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:26.168817   55831 sshutil.go:53] new ssh client: &{IP:192.168.61.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/pause-716229/id_rsa Username:docker}
	I1217 01:30:26.169069   55831 sshutil.go:53] new ssh client: &{IP:192.168.61.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/pause-716229/id_rsa Username:docker}
	I1217 01:30:26.315315   55831 ssh_runner.go:195] Run: systemctl --version
	I1217 01:30:26.328723   55831 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 01:30:26.561010   55831 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 01:30:26.585959   55831 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 01:30:26.586076   55831 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 01:30:26.641658   55831 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 01:30:26.641684   55831 start.go:496] detecting cgroup driver to use...
	I1217 01:30:26.641768   55831 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 01:30:26.693772   55831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 01:30:26.719728   55831 docker.go:218] disabling cri-docker service (if available) ...
	I1217 01:30:26.719802   55831 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 01:30:26.765707   55831 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 01:30:26.805381   55831 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 01:30:27.223191   55831 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 01:30:27.618734   55831 docker.go:234] disabling docker service ...
	I1217 01:30:27.618813   55831 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 01:30:27.696855   55831 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 01:30:27.740817   55831 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 01:30:28.152831   55831 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 01:30:28.522103   55831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 01:30:28.547218   55831 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 01:30:28.592003   55831 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 01:30:28.592089   55831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:30:28.637881   55831 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 01:30:28.637959   55831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:30:28.676778   55831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:30:28.703880   55831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:30:28.732130   55831 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 01:30:28.761140   55831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:30:28.777472   55831 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:30:28.798808   55831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
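Each Run: line above is a sed edit against the cri-o drop-in /etc/crio/crio.conf.d/02-crio.conf: set the pause image, force the cgroupfs cgroup manager, pin conmon to the pod cgroup, and allow unprivileged binds to any port via default_sysctls. Reassembled from those commands (the TOML section headers are assumed from cri-o's usual config layout, not captured from the VM), the edited fragment comes out roughly as:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]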
	I1217 01:30:28.822861   55831 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 01:30:28.842091   55831 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 01:30:28.864630   55831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:30:29.210820   55831 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 01:30:28.233405   55074 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:30:28.234435   55074 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 01:30:28.234455   55074 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 01:30:28.234642   55074 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 01:30:28.234657   55074 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 01:30:28.239255   55074 main.go:143] libmachine: domain old-k8s-version-625875 has defined MAC address 52:54:00:dd:10:92 in network mk-old-k8s-version-625875
	I1217 01:30:28.239783   55074 main.go:143] libmachine: domain old-k8s-version-625875 has defined MAC address 52:54:00:dd:10:92 in network mk-old-k8s-version-625875
	I1217 01:30:28.239826   55074 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:dd:10:92", ip: ""} in network mk-old-k8s-version-625875: {Iface:virbr4 ExpiryTime:2025-12-17 02:29:51 +0000 UTC Type:0 Mac:52:54:00:dd:10:92 Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:old-k8s-version-625875 Clientid:01:52:54:00:dd:10:92}
	I1217 01:30:28.239864   55074 main.go:143] libmachine: domain old-k8s-version-625875 has defined IP address 192.168.72.223 and MAC address 52:54:00:dd:10:92 in network mk-old-k8s-version-625875
	I1217 01:30:28.240468   55074 sshutil.go:53] new ssh client: &{IP:192.168.72.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/old-k8s-version-625875/id_rsa Username:docker}
	I1217 01:30:28.241049   55074 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:dd:10:92", ip: ""} in network mk-old-k8s-version-625875: {Iface:virbr4 ExpiryTime:2025-12-17 02:29:51 +0000 UTC Type:0 Mac:52:54:00:dd:10:92 Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:old-k8s-version-625875 Clientid:01:52:54:00:dd:10:92}
	I1217 01:30:28.241085   55074 main.go:143] libmachine: domain old-k8s-version-625875 has defined IP address 192.168.72.223 and MAC address 52:54:00:dd:10:92 in network mk-old-k8s-version-625875
	I1217 01:30:28.241610   55074 sshutil.go:53] new ssh client: &{IP:192.168.72.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/old-k8s-version-625875/id_rsa Username:docker}
	I1217 01:30:28.415211   55074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 01:30:28.564576   55074 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 01:30:28.745229   55074 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 01:30:28.892347   55074 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 01:30:30.293922   55074 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.878664512s)
	I1217 01:30:30.293960   55074 start.go:977] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
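The pipeline that just completed patches CoreDNS in place: it reads the coredns ConfigMap, uses sed to splice a hosts block ahead of the forward directive (and a log directive ahead of errors), then pushes the result back with kubectl replace. The injected portion of the Corefile should therefore read approximately as below, with the surrounding stock directives elided:

	.:53 {
	    log
	    errors
	    ...
	    hosts {
	       192.168.72.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}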
	I1217 01:30:30.293976   55074 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.729336989s)
	I1217 01:30:30.294042   55074 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.548756208s)
	I1217 01:30:30.295294   55074 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-625875" to be "Ready" ...
	I1217 01:30:30.320339   55074 node_ready.go:49] node "old-k8s-version-625875" is "Ready"
	I1217 01:30:30.320366   55074 node_ready.go:38] duration metric: took 25.031141ms for node "old-k8s-version-625875" to be "Ready" ...
	I1217 01:30:30.320381   55074 api_server.go:52] waiting for apiserver process to appear ...
	I1217 01:30:30.320451   55074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:30.783877   55074 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.891482625s)
	I1217 01:30:30.783882   55074 api_server.go:72] duration metric: took 2.555828475s to wait for apiserver process to appear ...
	I1217 01:30:30.783964   55074 api_server.go:88] waiting for apiserver healthz status ...
	I1217 01:30:30.784004   55074 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I1217 01:30:30.786946   55074 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1217 01:30:30.788431   55074 addons.go:530] duration metric: took 2.56035709s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1217 01:30:30.795203   55074 api_server.go:279] https://192.168.72.223:8443/healthz returned 200:
	ok
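api_server.go treats the control plane as healthy once /healthz returns 200 with body "ok", as it just did. The same probe works by hand, since Kubernetes' default system:public-info-viewer binding exposes /healthz even to unauthenticated clients (-k skips verification of the cluster's self-signed serving cert):

	curl -sk https://192.168.72.223:8443/healthz
	# expected body on a healthy apiserver: ok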
	I1217 01:30:30.799771   55074 api_server.go:141] control plane version: v1.28.0
	I1217 01:30:30.799807   55074 api_server.go:131] duration metric: took 15.821862ms to wait for apiserver health ...
	I1217 01:30:30.799822   55074 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 01:30:30.801046   55074 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-625875" context rescaled to 1 replicas
	I1217 01:30:30.835925   55074 system_pods.go:59] 8 kube-system pods found
	I1217 01:30:30.835968   55074 system_pods.go:61] "coredns-5dd5756b68-d46w4" [fed3a512-ea6c-4689-bd7b-e20329782c19] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 01:30:30.835980   55074 system_pods.go:61] "coredns-5dd5756b68-zrj9b" [b062fb48-19a3-4f1b-8b82-ee4be095f5be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 01:30:30.835989   55074 system_pods.go:61] "etcd-old-k8s-version-625875" [fc2b860d-240b-4306-b025-cf994e2e27e1] Running
	I1217 01:30:30.835996   55074 system_pods.go:61] "kube-apiserver-old-k8s-version-625875" [230920a9-2718-4952-8d7d-54244dbf3677] Running
	I1217 01:30:30.836001   55074 system_pods.go:61] "kube-controller-manager-old-k8s-version-625875" [2dd1a95b-0eb2-4084-9b77-e8ad47662e1a] Running
	I1217 01:30:30.836007   55074 system_pods.go:61] "kube-proxy-knddz" [851cb2e5-3111-4b21-9295-ffb6800af552] Running
	I1217 01:30:30.836015   55074 system_pods.go:61] "kube-scheduler-old-k8s-version-625875" [953e65e2-9fd5-456a-b0a2-d24b0c7c5945] Running
	I1217 01:30:30.836063   55074 system_pods.go:61] "storage-provisioner" [c0f7895a-51be-400b-bc50-2ade62ea8883] Pending
	I1217 01:30:30.836072   55074 system_pods.go:74] duration metric: took 36.241884ms to wait for pod list to return data ...
	I1217 01:30:30.836082   55074 default_sa.go:34] waiting for default service account to be created ...
	I1217 01:30:30.852150   55074 default_sa.go:45] found service account: "default"
	I1217 01:30:30.852181   55074 default_sa.go:55] duration metric: took 16.088177ms for default service account to be created ...
	I1217 01:30:30.852194   55074 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 01:30:30.869152   55074 system_pods.go:86] 8 kube-system pods found
	I1217 01:30:30.869192   55074 system_pods.go:89] "coredns-5dd5756b68-d46w4" [fed3a512-ea6c-4689-bd7b-e20329782c19] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 01:30:30.869202   55074 system_pods.go:89] "coredns-5dd5756b68-zrj9b" [b062fb48-19a3-4f1b-8b82-ee4be095f5be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 01:30:30.869211   55074 system_pods.go:89] "etcd-old-k8s-version-625875" [fc2b860d-240b-4306-b025-cf994e2e27e1] Running
	I1217 01:30:30.869219   55074 system_pods.go:89] "kube-apiserver-old-k8s-version-625875" [230920a9-2718-4952-8d7d-54244dbf3677] Running
	I1217 01:30:30.869224   55074 system_pods.go:89] "kube-controller-manager-old-k8s-version-625875" [2dd1a95b-0eb2-4084-9b77-e8ad47662e1a] Running
	I1217 01:30:30.869230   55074 system_pods.go:89] "kube-proxy-knddz" [851cb2e5-3111-4b21-9295-ffb6800af552] Running
	I1217 01:30:30.869235   55074 system_pods.go:89] "kube-scheduler-old-k8s-version-625875" [953e65e2-9fd5-456a-b0a2-d24b0c7c5945] Running
	I1217 01:30:30.869243   55074 system_pods.go:89] "storage-provisioner" [c0f7895a-51be-400b-bc50-2ade62ea8883] Pending
	I1217 01:30:30.869271   55074 retry.go:31] will retry after 285.428706ms: missing components: kube-dns
	I1217 01:30:31.178416   55074 system_pods.go:86] 8 kube-system pods found
	I1217 01:30:31.178470   55074 system_pods.go:89] "coredns-5dd5756b68-d46w4" [fed3a512-ea6c-4689-bd7b-e20329782c19] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 01:30:31.178481   55074 system_pods.go:89] "coredns-5dd5756b68-zrj9b" [b062fb48-19a3-4f1b-8b82-ee4be095f5be] Failed / Ready:PodFailed / ContainersReady:PodFailed
	I1217 01:30:31.178489   55074 system_pods.go:89] "etcd-old-k8s-version-625875" [fc2b860d-240b-4306-b025-cf994e2e27e1] Running
	I1217 01:30:31.178494   55074 system_pods.go:89] "kube-apiserver-old-k8s-version-625875" [230920a9-2718-4952-8d7d-54244dbf3677] Running
	I1217 01:30:31.178499   55074 system_pods.go:89] "kube-controller-manager-old-k8s-version-625875" [2dd1a95b-0eb2-4084-9b77-e8ad47662e1a] Running
	I1217 01:30:31.178504   55074 system_pods.go:89] "kube-proxy-knddz" [851cb2e5-3111-4b21-9295-ffb6800af552] Running
	I1217 01:30:31.178509   55074 system_pods.go:89] "kube-scheduler-old-k8s-version-625875" [953e65e2-9fd5-456a-b0a2-d24b0c7c5945] Running
	I1217 01:30:31.178516   55074 system_pods.go:89] "storage-provisioner" [c0f7895a-51be-400b-bc50-2ade62ea8883] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 01:30:31.178526   55074 system_pods.go:126] duration metric: took 326.324606ms to wait for k8s-apps to be running ...
	I1217 01:30:31.178537   55074 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 01:30:31.178590   55074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:30:31.203345   55074 system_svc.go:56] duration metric: took 24.794791ms WaitForService to wait for kubelet
	I1217 01:30:31.203381   55074 kubeadm.go:587] duration metric: took 2.975329276s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 01:30:31.203425   55074 node_conditions.go:102] verifying NodePressure condition ...
	I1217 01:30:31.206446   55074 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1217 01:30:31.206482   55074 node_conditions.go:123] node cpu capacity is 2
	I1217 01:30:31.206500   55074 node_conditions.go:105] duration metric: took 3.068585ms to run NodePressure ...
	I1217 01:30:31.206516   55074 start.go:242] waiting for startup goroutines ...
	I1217 01:30:31.206530   55074 start.go:247] waiting for cluster config update ...
	I1217 01:30:31.206543   55074 start.go:256] writing updated cluster config ...
	I1217 01:30:31.206863   55074 ssh_runner.go:195] Run: rm -f paused
	I1217 01:30:31.214476   55074 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 01:30:31.220146   55074 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-d46w4" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:30:32.226293   55074 pod_ready.go:94] pod "coredns-5dd5756b68-d46w4" is "Ready"
	I1217 01:30:32.226320   55074 pod_ready.go:86] duration metric: took 1.006144632s for pod "coredns-5dd5756b68-d46w4" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:30:32.226331   55074 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-zrj9b" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:30:32.229552   55074 pod_ready.go:99] pod "coredns-5dd5756b68-zrj9b" in "kube-system" namespace is gone: getting pod "coredns-5dd5756b68-zrj9b" in "kube-system" namespace (will retry): pods "coredns-5dd5756b68-zrj9b" not found
	I1217 01:30:32.229570   55074 pod_ready.go:86] duration metric: took 3.232306ms for pod "coredns-5dd5756b68-zrj9b" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:30:32.233254   55074 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-625875" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:30:32.239569   55074 pod_ready.go:94] pod "etcd-old-k8s-version-625875" is "Ready"
	I1217 01:30:32.239600   55074 pod_ready.go:86] duration metric: took 6.320312ms for pod "etcd-old-k8s-version-625875" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:30:32.244167   55074 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-625875" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:30:32.252968   55074 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-625875" is "Ready"
	I1217 01:30:32.252992   55074 pod_ready.go:86] duration metric: took 8.802019ms for pod "kube-apiserver-old-k8s-version-625875" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:30:32.259974   55074 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-625875" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:30:32.624417   55074 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-625875" is "Ready"
	I1217 01:30:32.624459   55074 pod_ready.go:86] duration metric: took 364.460199ms for pod "kube-controller-manager-old-k8s-version-625875" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:30:28.032357   55454 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.967721827s)
	I1217 01:30:28.032400   55454 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22168-12839/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1217 01:30:28.032425   55454 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1217 01:30:28.032450   55454 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.967630397s)
	I1217 01:30:28.032477   55454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1217 01:30:28.032484   55454 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1217 01:30:28.032513   55454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1217 01:30:30.343204   55454 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (2.310681804s)
	I1217 01:30:30.343254   55454 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22168-12839/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1217 01:30:30.343296   55454 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1217 01:30:30.343391   55454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1217 01:30:32.110834   55454 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.767419827s)
	I1217 01:30:32.110866   55454 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22168-12839/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1217 01:30:32.110910   55454 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1217 01:30:32.110971   55454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1217 01:30:32.829000   55074 pod_ready.go:83] waiting for pod "kube-proxy-knddz" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:30:33.224703   55074 pod_ready.go:94] pod "kube-proxy-knddz" is "Ready"
	I1217 01:30:33.224737   55074 pod_ready.go:86] duration metric: took 395.691073ms for pod "kube-proxy-knddz" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:30:33.425807   55074 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-625875" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:30:33.823856   55074 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-625875" is "Ready"
	I1217 01:30:33.823882   55074 pod_ready.go:86] duration metric: took 398.049592ms for pod "kube-scheduler-old-k8s-version-625875" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:30:33.823895   55074 pod_ready.go:40] duration metric: took 2.609377607s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 01:30:33.877121   55074 start.go:625] kubectl: 1.34.3, cluster: 1.28.0 (minor skew: 6)
	I1217 01:30:33.928163   55074 out.go:203] 
	W1217 01:30:33.929163   55074 out.go:285] ! /usr/local/bin/kubectl is version 1.34.3, which may have incompatibilities with Kubernetes 1.28.0.
	I1217 01:30:33.930598   55074 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1217 01:30:33.932525   55074 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-625875" cluster and "default" namespace by default
	I1217 01:30:32.997065   51086 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.088003038s)
	W1217 01:30:32.997111   51086 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1217 01:30:32.997120   51086 logs.go:123] Gathering logs for kube-apiserver [f3fc173ad1f59bcf9f322121a37044b907e449b8da0ff80d4a58047149d92060] ...
	I1217 01:30:32.997133   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3fc173ad1f59bcf9f322121a37044b907e449b8da0ff80d4a58047149d92060"
	I1217 01:30:33.054723   51086 logs.go:123] Gathering logs for etcd [4f7ae7022f37ebd3af667ff2925618ae8cba786b93551c6a75ea31610b440a23] ...
	I1217 01:30:33.054757   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f7ae7022f37ebd3af667ff2925618ae8cba786b93551c6a75ea31610b440a23"
	I1217 01:30:33.101351   51086 logs.go:123] Gathering logs for coredns [4e737919b3d271d8ca8cbf13101f204569ffdaf5b0e2afcbd8ab8a5d2febccbb] ...
	I1217 01:30:33.101381   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e737919b3d271d8ca8cbf13101f204569ffdaf5b0e2afcbd8ab8a5d2febccbb"
	I1217 01:30:33.161254   51086 logs.go:123] Gathering logs for storage-provisioner [02ce2c49a1d0f774c8135d8233229979162a7bb38d6f23fcdc2cd03bd2231c24] ...
	I1217 01:30:33.161286   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02ce2c49a1d0f774c8135d8233229979162a7bb38d6f23fcdc2cd03bd2231c24"
	I1217 01:30:33.210096   51086 logs.go:123] Gathering logs for kubelet ...
	I1217 01:30:33.210135   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:30:33.322873   51086 logs.go:123] Gathering logs for kube-proxy [2ae0779ffea3f0ab8e7bd0cdb83221c0aa0fb5faea9cec5cb3745615ebae1c13] ...
	I1217 01:30:33.322910   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ae0779ffea3f0ab8e7bd0cdb83221c0aa0fb5faea9cec5cb3745615ebae1c13"
	I1217 01:30:33.376240   51086 logs.go:123] Gathering logs for kube-apiserver [a60d85d10467e0ec2ad371d5ea0776e03d016cdc978561fa1498e90cabe0974e] ...
	I1217 01:30:33.376269   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a60d85d10467e0ec2ad371d5ea0776e03d016cdc978561fa1498e90cabe0974e"
	I1217 01:30:33.421703   51086 logs.go:123] Gathering logs for kube-controller-manager [92af4713cd6f5e3fd61ec4f8918aa425ca8e6b874f208f1ed31de3d4b68216c9] ...
	I1217 01:30:33.421743   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92af4713cd6f5e3fd61ec4f8918aa425ca8e6b874f208f1ed31de3d4b68216c9"
	I1217 01:30:33.461816   51086 logs.go:123] Gathering logs for storage-provisioner [04162ab2dd069b12b50be94adf86c654f689f3762162dd1256b82a03c40d9274] ...
	I1217 01:30:33.461843   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04162ab2dd069b12b50be94adf86c654f689f3762162dd1256b82a03c40d9274"
	I1217 01:30:33.510339   51086 logs.go:123] Gathering logs for container status ...
	I1217 01:30:33.510374   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:30:34.019660   55454 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.9086575s)
	I1217 01:30:34.019703   55454 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22168-12839/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1217 01:30:34.019738   55454 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1217 01:30:34.019801   55454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1217 01:30:34.776845   55454 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22168-12839/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1217 01:30:34.776902   55454 cache_images.go:125] Successfully loaded all cached images
	I1217 01:30:34.776909   55454 cache_images.go:94] duration metric: took 15.13877712s to LoadCachedImages
	I1217 01:30:34.776926   55454 kubeadm.go:935] updating node { 192.168.83.246 8443 v1.35.0-beta.0 crio true true} ...
	I1217 01:30:34.777054   55454 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-395127 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.246
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-395127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 01:30:34.777155   55454 ssh_runner.go:195] Run: crio config
	I1217 01:30:34.830207   55454 cni.go:84] Creating CNI manager for ""
	I1217 01:30:34.830235   55454 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 01:30:34.830253   55454 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 01:30:34.830277   55454 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.246 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-395127 NodeName:no-preload-395127 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.246"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.246 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 01:30:34.830392   55454 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.246
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-395127"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.83.246"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.246"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
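This generated kubeadm config is shipped to the node as /var/tmp/minikube/kubeadm.yaml.new a few lines below and swapped into place before kubeadm init runs. A config like this can be sanity-checked without mutating the node via kubeadm's dry-run mode; an illustrative check, not a step minikube performs:

	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml \
	  --dry-run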
	
	I1217 01:30:34.830467   55454 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 01:30:34.843265   55454 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1217 01:30:34.843322   55454 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 01:30:34.855911   55454 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256
	I1217 01:30:34.855999   55454 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/22168-12839/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm
	I1217 01:30:34.856013   55454 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1217 01:30:34.856097   55454 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/22168-12839/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet
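With no preload available, binary.go pulls kubectl, kubeadm, and kubelet straight from dl.k8s.io and validates each download against the release's published .sha256 file (the checksum=file:... suffix in the URLs above). The equivalent manual fetch-and-verify, using the standard Kubernetes release layout:

	v=v1.35.0-beta.0
	curl -LO "https://dl.k8s.io/release/${v}/bin/linux/amd64/kubectl"
	curl -LO "https://dl.k8s.io/release/${v}/bin/linux/amd64/kubectl.sha256"
	echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check
	# prints "kubectl: OK" when the digest matches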
	I1217 01:30:34.861550   55454 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1217 01:30:34.861574   55454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (58589368 bytes)
	I1217 01:30:35.776063   55454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:30:35.793356   55454 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1217 01:30:35.798423   55454 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1217 01:30:35.798461   55454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (58106148 bytes)
	I1217 01:30:35.894607   55454 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1217 01:30:35.902485   55454 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1217 01:30:35.902533   55454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (72364216 bytes)
	I1217 01:30:36.398407   55454 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 01:30:36.412721   55454 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I1217 01:30:36.441530   55454 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 01:30:36.468394   55454 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1217 01:30:36.492467   55454 ssh_runner.go:195] Run: grep 192.168.83.246	control-plane.minikube.internal$ /etc/hosts
	I1217 01:30:36.497905   55454 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.246	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
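The one-liner above is an idempotent /etc/hosts update: drop any existing control-plane.minikube.internal record, append the current one, and replace the file through a temp copy. Unrolled for readability (same behavior; the variable names are mine):

	ip=192.168.83.246
	name=control-plane.minikube.internal
	{
	  grep -v $'\t'"${name}\$" /etc/hosts      # filter out any stale record
	  printf '%s\t%s\n' "$ip" "$name"          # append the fresh mapping
	} > "/tmp/h.$$"
	sudo cp "/tmp/h.$$" /etc/hosts             # install via cp so the file's inode and perms survive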
	I1217 01:30:36.518712   55454 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:30:36.694158   55454 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 01:30:36.724440   55454 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127 for IP: 192.168.83.246
	I1217 01:30:36.724465   55454 certs.go:195] generating shared ca certs ...
	I1217 01:30:36.724486   55454 certs.go:227] acquiring lock for ca certs: {Name:mk381e1d576792ac916a6048c2225a8ab856de70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:30:36.724683   55454 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-12839/.minikube/ca.key
	I1217 01:30:36.724756   55454 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-12839/.minikube/proxy-client-ca.key
	I1217 01:30:36.724773   55454 certs.go:257] generating profile certs ...
	I1217 01:30:36.724862   55454 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/client.key
	I1217 01:30:36.724881   55454 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/client.crt with IP's: []
	I1217 01:30:36.751766   55454 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/client.crt ...
	I1217 01:30:36.751806   55454 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/client.crt: {Name:mkd427468268a7fd4ff3ed24fee2d61ff6038b6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:30:36.752110   55454 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/client.key ...
	I1217 01:30:36.752138   55454 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/client.key: {Name:mk10e663a87f7a4413195cb7b6ea70cdafef6e84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:30:36.752288   55454 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/apiserver.key.a1d3f72a
	I1217 01:30:36.752318   55454 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/apiserver.crt.a1d3f72a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.83.246]
	I1217 01:30:36.849835   55454 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/apiserver.crt.a1d3f72a ...
	I1217 01:30:36.849861   55454 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/apiserver.crt.a1d3f72a: {Name:mk62f564f24f1af6c95e8e9a784ebc7ccfcac644 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:30:36.850174   55454 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/apiserver.key.a1d3f72a ...
	I1217 01:30:36.850194   55454 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/apiserver.key.a1d3f72a: {Name:mk4a9d85ca062dc4a05f2714b655be2778d32e5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:30:36.850302   55454 certs.go:382] copying /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/apiserver.crt.a1d3f72a -> /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/apiserver.crt
	I1217 01:30:36.850403   55454 certs.go:386] copying /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/apiserver.key.a1d3f72a -> /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/apiserver.key
	I1217 01:30:36.850479   55454 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/proxy-client.key
	I1217 01:30:36.850496   55454 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/proxy-client.crt with IP's: []
	I1217 01:30:36.895726   55454 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/proxy-client.crt ...
	I1217 01:30:36.895753   55454 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/proxy-client.crt: {Name:mkeaac75e9f3a41f3997c4b91dc787a6f4fe703e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:30:36.895936   55454 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/proxy-client.key ...
	I1217 01:30:36.895956   55454 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/proxy-client.key: {Name:mk677012196e2c8d5d593091aa91eb449a55dbf6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:30:36.896226   55454 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/17074.pem (1338 bytes)
	W1217 01:30:36.896278   55454 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-12839/.minikube/certs/17074_empty.pem, impossibly tiny 0 bytes
	I1217 01:30:36.896296   55454 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca-key.pem (1679 bytes)
	I1217 01:30:36.896329   55454 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca.pem (1078 bytes)
	I1217 01:30:36.896361   55454 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/cert.pem (1123 bytes)
	I1217 01:30:36.896407   55454 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/key.pem (1679 bytes)
	I1217 01:30:36.896473   55454 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/files/etc/ssl/certs/170742.pem (1708 bytes)
	I1217 01:30:36.897051   55454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 01:30:36.933336   55454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 01:30:36.969808   55454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 01:30:37.005731   55454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 01:30:37.040631   55454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 01:30:37.074542   55454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 01:30:37.107663   55454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 01:30:37.139407   55454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 01:30:37.174373   55454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/certs/17074.pem --> /usr/share/ca-certificates/17074.pem (1338 bytes)
	I1217 01:30:37.208403   55454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/files/etc/ssl/certs/170742.pem --> /usr/share/ca-certificates/170742.pem (1708 bytes)
	I1217 01:30:37.245797   55454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 01:30:37.281981   55454 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 01:30:37.307979   55454 ssh_runner.go:195] Run: openssl version
	I1217 01:30:37.315190   55454 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/17074.pem
	I1217 01:30:37.327810   55454 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/17074.pem /etc/ssl/certs/17074.pem
	I1217 01:30:37.340479   55454 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17074.pem
	I1217 01:30:37.347676   55454 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:18 /usr/share/ca-certificates/17074.pem
	I1217 01:30:37.347778   55454 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17074.pem
	I1217 01:30:37.356219   55454 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 01:30:37.368216   55454 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/17074.pem /etc/ssl/certs/51391683.0
	I1217 01:30:37.380940   55454 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/170742.pem
	I1217 01:30:37.395106   55454 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/170742.pem /etc/ssl/certs/170742.pem
	I1217 01:30:37.411341   55454 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/170742.pem
	I1217 01:30:37.420147   55454 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:18 /usr/share/ca-certificates/170742.pem
	I1217 01:30:37.420221   55454 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/170742.pem
	I1217 01:30:37.430632   55454 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 01:30:37.446229   55454 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/170742.pem /etc/ssl/certs/3ec20f2e.0
	I1217 01:30:37.461407   55454 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:30:37.475964   55454 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 01:30:37.493415   55454 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:30:37.500230   55454 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:30:37.500309   55454 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:30:37.510547   55454 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 01:30:37.526312   55454 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
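The test/ln/openssl sequence above (repeated for 17074.pem, 170742.pem, and minikubeCA.pem) builds OpenSSL's hashed CA directory: each trusted cert gets a /etc/ssl/certs/<subject-hash>.0 symlink, which lets TLS libraries locate an issuer by subject-name hash instead of scanning every file. Condensed into one sketch:

	pem=/usr/share/ca-certificates/minikubeCA.pem
	sudo ln -fs "$pem" /etc/ssl/certs/minikubeCA.pem
	h=$(openssl x509 -hash -noout -in "$pem")          # b5213941 in this run
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"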
	I1217 01:30:37.539865   55454 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 01:30:37.545417   55454 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 01:30:37.545484   55454 kubeadm.go:401] StartCluster: {Name:no-preload-395127 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-395127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.246 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:30:37.545577   55454 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 01:30:37.545631   55454 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 01:30:37.589462   55454 cri.go:89] found id: ""
	I1217 01:30:37.589534   55454 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 01:30:37.605156   55454 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 01:30:37.619956   55454 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 01:30:37.632973   55454 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 01:30:37.632997   55454 kubeadm.go:158] found existing configuration files:
	
	I1217 01:30:37.633063   55454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 01:30:37.645319   55454 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 01:30:37.645395   55454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 01:30:37.658341   55454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 01:30:37.671967   55454 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 01:30:37.672066   55454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 01:30:37.686062   55454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 01:30:37.698805   55454 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 01:30:37.698877   55454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 01:30:37.712943   55454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 01:30:37.725482   55454 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 01:30:37.725539   55454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 01:30:37.740564   55454 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1217 01:30:39.511179   55831 ssh_runner.go:235] Completed: sudo systemctl restart crio: (10.300301609s)
	I1217 01:30:39.511222   55831 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 01:30:39.511275   55831 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 01:30:39.517085   55831 start.go:564] Will wait 60s for crictl version
	I1217 01:30:39.517172   55831 ssh_runner.go:195] Run: which crictl
	I1217 01:30:39.521729   55831 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1217 01:30:39.615915   55831 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1217 01:30:39.616011   55831 ssh_runner.go:195] Run: crio --version
	I1217 01:30:39.673265   55831 ssh_runner.go:195] Run: crio --version
	I1217 01:30:39.728526   55831 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1217 01:30:36.068083   51086 api_server.go:253] Checking apiserver healthz at https://192.168.39.33:8443/healthz ...
	I1217 01:30:36.069202   51086 api_server.go:269] stopped: https://192.168.39.33:8443/healthz: Get "https://192.168.39.33:8443/healthz": dial tcp 192.168.39.33:8443: connect: connection refused
	I1217 01:30:36.069329   51086 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:30:36.069433   51086 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:30:36.128104   51086 cri.go:89] found id: "f3fc173ad1f59bcf9f322121a37044b907e449b8da0ff80d4a58047149d92060"
	I1217 01:30:36.128138   51086 cri.go:89] found id: ""
	I1217 01:30:36.128150   51086 logs.go:282] 1 containers: [f3fc173ad1f59bcf9f322121a37044b907e449b8da0ff80d4a58047149d92060]
	I1217 01:30:36.128223   51086 ssh_runner.go:195] Run: which crictl
	I1217 01:30:36.135012   51086 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:30:36.135127   51086 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:30:36.191894   51086 cri.go:89] found id: "4f7ae7022f37ebd3af667ff2925618ae8cba786b93551c6a75ea31610b440a23"
	I1217 01:30:36.191921   51086 cri.go:89] found id: ""
	I1217 01:30:36.191933   51086 logs.go:282] 1 containers: [4f7ae7022f37ebd3af667ff2925618ae8cba786b93551c6a75ea31610b440a23]
	I1217 01:30:36.191999   51086 ssh_runner.go:195] Run: which crictl
	I1217 01:30:36.197585   51086 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:30:36.197684   51086 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:30:36.256920   51086 cri.go:89] found id: "4e737919b3d271d8ca8cbf13101f204569ffdaf5b0e2afcbd8ab8a5d2febccbb"
	I1217 01:30:36.256951   51086 cri.go:89] found id: "af1f0e1b40070c8723057b438d0b9fcec2ba7192a110956e8eb26a35dad9df7c"
	I1217 01:30:36.256957   51086 cri.go:89] found id: ""
	I1217 01:30:36.256965   51086 logs.go:282] 2 containers: [4e737919b3d271d8ca8cbf13101f204569ffdaf5b0e2afcbd8ab8a5d2febccbb af1f0e1b40070c8723057b438d0b9fcec2ba7192a110956e8eb26a35dad9df7c]
	I1217 01:30:36.257043   51086 ssh_runner.go:195] Run: which crictl
	I1217 01:30:36.262692   51086 ssh_runner.go:195] Run: which crictl
	I1217 01:30:36.267924   51086 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:30:36.268002   51086 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:30:36.315004   51086 cri.go:89] found id: "11c7ac9704180c5215010fc334223b3ee30a5e3fcbf5bc9668670b480422a43b"
	I1217 01:30:36.315047   51086 cri.go:89] found id: "fc4edf606d72313439460b2d586b38df7af1e915c6576704df53a9f7e030a722"
	I1217 01:30:36.315055   51086 cri.go:89] found id: ""
	I1217 01:30:36.315065   51086 logs.go:282] 2 containers: [11c7ac9704180c5215010fc334223b3ee30a5e3fcbf5bc9668670b480422a43b fc4edf606d72313439460b2d586b38df7af1e915c6576704df53a9f7e030a722]
	I1217 01:30:36.315138   51086 ssh_runner.go:195] Run: which crictl
	I1217 01:30:36.321060   51086 ssh_runner.go:195] Run: which crictl
	I1217 01:30:36.327363   51086 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:30:36.327452   51086 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:30:36.389688   51086 cri.go:89] found id: "2ae0779ffea3f0ab8e7bd0cdb83221c0aa0fb5faea9cec5cb3745615ebae1c13"
	I1217 01:30:36.389718   51086 cri.go:89] found id: ""
	I1217 01:30:36.389727   51086 logs.go:282] 1 containers: [2ae0779ffea3f0ab8e7bd0cdb83221c0aa0fb5faea9cec5cb3745615ebae1c13]
	I1217 01:30:36.389793   51086 ssh_runner.go:195] Run: which crictl
	I1217 01:30:36.394620   51086 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:30:36.394710   51086 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:30:36.439277   51086 cri.go:89] found id: "92af4713cd6f5e3fd61ec4f8918aa425ca8e6b874f208f1ed31de3d4b68216c9"
	I1217 01:30:36.439305   51086 cri.go:89] found id: ""
	I1217 01:30:36.439314   51086 logs.go:282] 1 containers: [92af4713cd6f5e3fd61ec4f8918aa425ca8e6b874f208f1ed31de3d4b68216c9]
	I1217 01:30:36.439368   51086 ssh_runner.go:195] Run: which crictl
	I1217 01:30:36.444552   51086 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:30:36.444654   51086 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:30:36.489054   51086 cri.go:89] found id: ""
	I1217 01:30:36.489087   51086 logs.go:282] 0 containers: []
	W1217 01:30:36.489095   51086 logs.go:284] No container was found matching "kindnet"
	I1217 01:30:36.489101   51086 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:30:36.489157   51086 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:30:36.529476   51086 cri.go:89] found id: "04162ab2dd069b12b50be94adf86c654f689f3762162dd1256b82a03c40d9274"
	I1217 01:30:36.529502   51086 cri.go:89] found id: "02ce2c49a1d0f774c8135d8233229979162a7bb38d6f23fcdc2cd03bd2231c24"
	I1217 01:30:36.529508   51086 cri.go:89] found id: ""
	I1217 01:30:36.529517   51086 logs.go:282] 2 containers: [04162ab2dd069b12b50be94adf86c654f689f3762162dd1256b82a03c40d9274 02ce2c49a1d0f774c8135d8233229979162a7bb38d6f23fcdc2cd03bd2231c24]
	I1217 01:30:36.529582   51086 ssh_runner.go:195] Run: which crictl
	I1217 01:30:36.534695   51086 ssh_runner.go:195] Run: which crictl
	I1217 01:30:36.539112   51086 logs.go:123] Gathering logs for etcd [4f7ae7022f37ebd3af667ff2925618ae8cba786b93551c6a75ea31610b440a23] ...
	I1217 01:30:36.539146   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f7ae7022f37ebd3af667ff2925618ae8cba786b93551c6a75ea31610b440a23"
	I1217 01:30:36.585461   51086 logs.go:123] Gathering logs for kube-scheduler [11c7ac9704180c5215010fc334223b3ee30a5e3fcbf5bc9668670b480422a43b] ...
	I1217 01:30:36.585494   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c7ac9704180c5215010fc334223b3ee30a5e3fcbf5bc9668670b480422a43b"
	I1217 01:30:36.682813   51086 logs.go:123] Gathering logs for kube-scheduler [fc4edf606d72313439460b2d586b38df7af1e915c6576704df53a9f7e030a722] ...
	I1217 01:30:36.682856   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc4edf606d72313439460b2d586b38df7af1e915c6576704df53a9f7e030a722"
	I1217 01:30:36.729668   51086 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:30:36.729698   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:30:37.163566   51086 logs.go:123] Gathering logs for coredns [af1f0e1b40070c8723057b438d0b9fcec2ba7192a110956e8eb26a35dad9df7c] ...
	I1217 01:30:37.163612   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af1f0e1b40070c8723057b438d0b9fcec2ba7192a110956e8eb26a35dad9df7c"
	I1217 01:30:37.201517   51086 logs.go:123] Gathering logs for kubelet ...
	I1217 01:30:37.201568   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:30:37.307741   51086 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:30:37.307780   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:30:37.383323   51086 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:30:37.383358   51086 logs.go:123] Gathering logs for kube-apiserver [f3fc173ad1f59bcf9f322121a37044b907e449b8da0ff80d4a58047149d92060] ...
	I1217 01:30:37.383375   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3fc173ad1f59bcf9f322121a37044b907e449b8da0ff80d4a58047149d92060"
	I1217 01:30:37.427800   51086 logs.go:123] Gathering logs for coredns [4e737919b3d271d8ca8cbf13101f204569ffdaf5b0e2afcbd8ab8a5d2febccbb] ...
	I1217 01:30:37.427830   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e737919b3d271d8ca8cbf13101f204569ffdaf5b0e2afcbd8ab8a5d2febccbb"
	I1217 01:30:37.492093   51086 logs.go:123] Gathering logs for kube-proxy [2ae0779ffea3f0ab8e7bd0cdb83221c0aa0fb5faea9cec5cb3745615ebae1c13] ...
	I1217 01:30:37.492156   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ae0779ffea3f0ab8e7bd0cdb83221c0aa0fb5faea9cec5cb3745615ebae1c13"
	I1217 01:30:37.537089   51086 logs.go:123] Gathering logs for kube-controller-manager [92af4713cd6f5e3fd61ec4f8918aa425ca8e6b874f208f1ed31de3d4b68216c9] ...
	I1217 01:30:37.537127   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92af4713cd6f5e3fd61ec4f8918aa425ca8e6b874f208f1ed31de3d4b68216c9"
	I1217 01:30:37.584499   51086 logs.go:123] Gathering logs for storage-provisioner [02ce2c49a1d0f774c8135d8233229979162a7bb38d6f23fcdc2cd03bd2231c24] ...
	I1217 01:30:37.584533   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02ce2c49a1d0f774c8135d8233229979162a7bb38d6f23fcdc2cd03bd2231c24"
	I1217 01:30:37.632249   51086 logs.go:123] Gathering logs for container status ...
	I1217 01:30:37.632291   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:30:37.682220   51086 logs.go:123] Gathering logs for dmesg ...
	I1217 01:30:37.682253   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:30:37.700732   51086 logs.go:123] Gathering logs for storage-provisioner [04162ab2dd069b12b50be94adf86c654f689f3762162dd1256b82a03c40d9274] ...
	I1217 01:30:37.700768   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04162ab2dd069b12b50be94adf86c654f689f3762162dd1256b82a03c40d9274"
	I1217 01:30:37.979268   55454 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 01:30:39.733629   55831 main.go:143] libmachine: domain pause-716229 has defined MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:39.734200   55831 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:cd:79", ip: ""} in network mk-pause-716229: {Iface:virbr3 ExpiryTime:2025-12-17 02:29:14 +0000 UTC Type:0 Mac:52:54:00:41:cd:79 Iaid: IPaddr:192.168.61.9 Prefix:24 Hostname:pause-716229 Clientid:01:52:54:00:41:cd:79}
	I1217 01:30:39.734246   55831 main.go:143] libmachine: domain pause-716229 has defined IP address 192.168.61.9 and MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:39.734572   55831 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1217 01:30:39.743102   55831 kubeadm.go:884] updating cluster {Name:pause-716229 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2
ClusterName:pause-716229 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.9 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia
-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 01:30:39.743327   55831 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 01:30:39.743396   55831 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 01:30:39.901711   55831 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 01:30:39.901745   55831 crio.go:433] Images already preloaded, skipping extraction
	I1217 01:30:39.901815   55831 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 01:30:39.983579   55831 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 01:30:39.983613   55831 cache_images.go:86] Images are preloaded, skipping loading
	I1217 01:30:39.983624   55831 kubeadm.go:935] updating node { 192.168.61.9 8443 v1.34.2 crio true true} ...
	I1217 01:30:39.983759   55831 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-716229 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.9
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-716229 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 01:30:39.983876   55831 ssh_runner.go:195] Run: crio config
	I1217 01:30:40.089652   55831 cni.go:84] Creating CNI manager for ""
	I1217 01:30:40.089693   55831 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 01:30:40.089712   55831 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 01:30:40.089741   55831 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.9 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-716229 NodeName:pause-716229 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.9"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.9 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 01:30:40.089943   55831 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.9
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-716229"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.9"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.9"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 01:30:40.090113   55831 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1217 01:30:40.118423   55831 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 01:30:40.118516   55831 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 01:30:40.140559   55831 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I1217 01:30:40.182106   55831 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 01:30:40.207405   55831 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1217 01:30:40.262429   55831 ssh_runner.go:195] Run: grep 192.168.61.9	control-plane.minikube.internal$ /etc/hosts
	I1217 01:30:40.281797   55831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:30:40.620533   55831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 01:30:40.649489   55831 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/pause-716229 for IP: 192.168.61.9
	I1217 01:30:40.649513   55831 certs.go:195] generating shared ca certs ...
	I1217 01:30:40.649530   55831 certs.go:227] acquiring lock for ca certs: {Name:mk381e1d576792ac916a6048c2225a8ab856de70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:30:40.649705   55831 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-12839/.minikube/ca.key
	I1217 01:30:40.649778   55831 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-12839/.minikube/proxy-client-ca.key
	I1217 01:30:40.649806   55831 certs.go:257] generating profile certs ...
	I1217 01:30:40.649956   55831 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/pause-716229/client.key
	I1217 01:30:40.650102   55831 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/pause-716229/apiserver.key.9d9987e4
	I1217 01:30:40.650170   55831 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/pause-716229/proxy-client.key
	I1217 01:30:40.650357   55831 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/17074.pem (1338 bytes)
	W1217 01:30:40.650396   55831 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-12839/.minikube/certs/17074_empty.pem, impossibly tiny 0 bytes
	I1217 01:30:40.650405   55831 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca-key.pem (1679 bytes)
	I1217 01:30:40.650431   55831 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca.pem (1078 bytes)
	I1217 01:30:40.650453   55831 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/cert.pem (1123 bytes)
	I1217 01:30:40.650483   55831 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/key.pem (1679 bytes)
	I1217 01:30:40.650529   55831 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/files/etc/ssl/certs/170742.pem (1708 bytes)
	I1217 01:30:40.651172   55831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 01:30:40.707541   55831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 01:30:40.769292   55831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 01:30:40.816066   55831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 01:30:40.860727   55831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/pause-716229/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1217 01:30:40.900973   55831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/pause-716229/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 01:30:40.934536   55831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/pause-716229/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 01:30:40.970705   55831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/pause-716229/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 01:30:41.004205   55831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/files/etc/ssl/certs/170742.pem --> /usr/share/ca-certificates/170742.pem (1708 bytes)
	I1217 01:30:41.046143   55831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 01:30:41.083364   55831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/certs/17074.pem --> /usr/share/ca-certificates/17074.pem (1338 bytes)
	I1217 01:30:41.119367   55831 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 01:30:41.148621   55831 ssh_runner.go:195] Run: openssl version
	I1217 01:30:41.156675   55831 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/17074.pem
	I1217 01:30:41.172461   55831 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/17074.pem /etc/ssl/certs/17074.pem
	I1217 01:30:41.188236   55831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17074.pem
	I1217 01:30:41.194693   55831 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:18 /usr/share/ca-certificates/17074.pem
	I1217 01:30:41.194767   55831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17074.pem
	I1217 01:30:41.203719   55831 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 01:30:41.220299   55831 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/170742.pem
	I1217 01:30:41.236087   55831 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/170742.pem /etc/ssl/certs/170742.pem
	I1217 01:30:41.251908   55831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/170742.pem
	I1217 01:30:41.258448   55831 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:18 /usr/share/ca-certificates/170742.pem
	I1217 01:30:41.258512   55831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/170742.pem
	I1217 01:30:41.269418   55831 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 01:30:41.287477   55831 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:30:41.300413   55831 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 01:30:41.313271   55831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:30:41.319461   55831 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:30:41.319530   55831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:30:41.327881   55831 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 01:30:41.344267   55831 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 01:30:41.350375   55831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 01:30:41.359771   55831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 01:30:41.368879   55831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 01:30:41.377291   55831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 01:30:41.387445   55831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 01:30:41.396912   55831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1217 01:30:41.405959   55831 kubeadm.go:401] StartCluster: {Name:pause-716229 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 Cl
usterName:pause-716229 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.9 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gp
u-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:30:41.406139   55831 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 01:30:41.406227   55831 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 01:30:41.455737   55831 cri.go:89] found id: "2b923cc6453c863412d08cc49a0d17451f0fe4f0ef72a6c2dae9970574e5668f"
	I1217 01:30:41.455760   55831 cri.go:89] found id: "c6661edccb3b25fc75fc44ed63529a477c27e51decbe411700030f58380f028d"
	I1217 01:30:41.455766   55831 cri.go:89] found id: "4ab70530751bb7195a9e9385ea81c60aca8226c38f366f74a8ade07361033002"
	I1217 01:30:41.455771   55831 cri.go:89] found id: "b7b6956036af3c69a90a6e5dd61d14124fa30850b8ec8db991c70d667888a542"
	I1217 01:30:41.455776   55831 cri.go:89] found id: "8016a5f4fa0b7c8ceda82ce8e8e6d276852bea59b597f635afd89296a9090632"
	I1217 01:30:41.455781   55831 cri.go:89] found id: "a7fbae2a502d7025e987f1bd5ae191db5709dff042be3cf7250d266712f0d834"
	I1217 01:30:41.455785   55831 cri.go:89] found id: ""
	I1217 01:30:41.455835   55831 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-716229 -n pause-716229
helpers_test.go:270: (dbg) Run:  kubectl --context pause-716229 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-716229 -n pause-716229
helpers_test.go:253: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-716229 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p pause-716229 logs -n 25: (1.38520996s)
helpers_test.go:261: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                    ARGS                                                                                                                     │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-428588 sudo cat /var/lib/kubelet/config.yaml                                                                                                                                                                                      │ cilium-428588          │ jenkins │ v1.37.0 │ 17 Dec 25 01:29 UTC │                     │
	│ ssh     │ -p cilium-428588 sudo systemctl status docker --all --full --no-pager                                                                                                                                                                       │ cilium-428588          │ jenkins │ v1.37.0 │ 17 Dec 25 01:29 UTC │                     │
	│ ssh     │ -p cilium-428588 sudo systemctl cat docker --no-pager                                                                                                                                                                                       │ cilium-428588          │ jenkins │ v1.37.0 │ 17 Dec 25 01:29 UTC │                     │
	│ ssh     │ -p cilium-428588 sudo cat /etc/docker/daemon.json                                                                                                                                                                                           │ cilium-428588          │ jenkins │ v1.37.0 │ 17 Dec 25 01:29 UTC │                     │
	│ ssh     │ -p cilium-428588 sudo docker system info                                                                                                                                                                                                    │ cilium-428588          │ jenkins │ v1.37.0 │ 17 Dec 25 01:29 UTC │                     │
	│ ssh     │ -p cilium-428588 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                   │ cilium-428588          │ jenkins │ v1.37.0 │ 17 Dec 25 01:29 UTC │                     │
	│ ssh     │ -p cilium-428588 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                   │ cilium-428588          │ jenkins │ v1.37.0 │ 17 Dec 25 01:29 UTC │                     │
	│ ssh     │ -p cilium-428588 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                              │ cilium-428588          │ jenkins │ v1.37.0 │ 17 Dec 25 01:29 UTC │                     │
	│ ssh     │ -p cilium-428588 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                        │ cilium-428588          │ jenkins │ v1.37.0 │ 17 Dec 25 01:29 UTC │                     │
	│ ssh     │ -p cilium-428588 sudo cri-dockerd --version                                                                                                                                                                                                 │ cilium-428588          │ jenkins │ v1.37.0 │ 17 Dec 25 01:29 UTC │                     │
	│ ssh     │ -p cilium-428588 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                   │ cilium-428588          │ jenkins │ v1.37.0 │ 17 Dec 25 01:29 UTC │                     │
	│ ssh     │ -p cilium-428588 sudo systemctl cat containerd --no-pager                                                                                                                                                                                   │ cilium-428588          │ jenkins │ v1.37.0 │ 17 Dec 25 01:29 UTC │                     │
	│ ssh     │ -p cilium-428588 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                            │ cilium-428588          │ jenkins │ v1.37.0 │ 17 Dec 25 01:29 UTC │                     │
	│ ssh     │ -p cilium-428588 sudo cat /etc/containerd/config.toml                                                                                                                                                                                       │ cilium-428588          │ jenkins │ v1.37.0 │ 17 Dec 25 01:29 UTC │                     │
	│ ssh     │ -p cilium-428588 sudo containerd config dump                                                                                                                                                                                                │ cilium-428588          │ jenkins │ v1.37.0 │ 17 Dec 25 01:29 UTC │                     │
	│ ssh     │ -p cilium-428588 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                         │ cilium-428588          │ jenkins │ v1.37.0 │ 17 Dec 25 01:29 UTC │                     │
	│ ssh     │ -p cilium-428588 sudo systemctl cat crio --no-pager                                                                                                                                                                                         │ cilium-428588          │ jenkins │ v1.37.0 │ 17 Dec 25 01:29 UTC │                     │
	│ ssh     │ -p cilium-428588 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                               │ cilium-428588          │ jenkins │ v1.37.0 │ 17 Dec 25 01:29 UTC │                     │
	│ ssh     │ -p cilium-428588 sudo crio config                                                                                                                                                                                                           │ cilium-428588          │ jenkins │ v1.37.0 │ 17 Dec 25 01:29 UTC │                     │
	│ delete  │ -p cilium-428588                                                                                                                                                                                                                            │ cilium-428588          │ jenkins │ v1.37.0 │ 17 Dec 25 01:29 UTC │ 17 Dec 25 01:29 UTC │
	│ start   │ -p old-k8s-version-625875 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-625875 │ jenkins │ v1.37.0 │ 17 Dec 25 01:29 UTC │ 17 Dec 25 01:30 UTC │
	│ start   │ -p no-preload-395127 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                                                                                │ no-preload-395127      │ jenkins │ v1.37.0 │ 17 Dec 25 01:29 UTC │                     │
	│ start   │ -p pause-716229 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                                                              │ pause-716229           │ jenkins │ v1.37.0 │ 17 Dec 25 01:30 UTC │ 17 Dec 25 01:30 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-625875 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                │ old-k8s-version-625875 │ jenkins │ v1.37.0 │ 17 Dec 25 01:30 UTC │ 17 Dec 25 01:30 UTC │
	│ stop    │ -p old-k8s-version-625875 --alsologtostderr -v=3                                                                                                                                                                                            │ old-k8s-version-625875 │ jenkins │ v1.37.0 │ 17 Dec 25 01:30 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 01:30:19
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 01:30:19.435564   55831 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:30:19.435724   55831 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:30:19.435736   55831 out.go:374] Setting ErrFile to fd 2...
	I1217 01:30:19.435742   55831 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:30:19.436062   55831 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12839/.minikube/bin
	I1217 01:30:19.436673   55831 out.go:368] Setting JSON to false
	I1217 01:30:19.437941   55831 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7965,"bootTime":1765927054,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 01:30:19.437992   55831 start.go:143] virtualization: kvm guest
	I1217 01:30:19.440813   55831 out.go:179] * [pause-716229] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 01:30:19.444039   55831 notify.go:221] Checking for updates...
	I1217 01:30:19.444067   55831 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 01:30:19.445577   55831 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 01:30:19.447011   55831 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-12839/kubeconfig
	I1217 01:30:19.448616   55831 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12839/.minikube
	I1217 01:30:19.449986   55831 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 01:30:19.451166   55831 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 01:30:19.453108   55831 config.go:182] Loaded profile config "pause-716229": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:30:19.453690   55831 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 01:30:19.494887   55831 out.go:179] * Using the kvm2 driver based on existing profile
	I1217 01:30:19.495979   55831 start.go:309] selected driver: kvm2
	I1217 01:30:19.495995   55831 start.go:927] validating driver "kvm2" against &{Name:pause-716229 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.34.2 ClusterName:pause-716229 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.9 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-install
er:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:30:19.496174   55831 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 01:30:19.497122   55831 cni.go:84] Creating CNI manager for ""
	I1217 01:30:19.497194   55831 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 01:30:19.497254   55831 start.go:353] cluster config:
	{Name:pause-716229 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-716229 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.9 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false po
rtainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:30:19.497389   55831 iso.go:125] acquiring lock: {Name:mk94a221d1243bc618ab687e91468d7a3f9fe960 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:30:19.499240   55831 out.go:179] * Starting "pause-716229" primary control-plane node in "pause-716229" cluster
	I1217 01:30:19.500444   55831 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 01:30:19.500486   55831 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-12839/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1217 01:30:19.500499   55831 cache.go:65] Caching tarball of preloaded images
	I1217 01:30:19.500597   55831 preload.go:238] Found /home/jenkins/minikube-integration/22168-12839/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 01:30:19.500610   55831 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1217 01:30:19.500750   55831 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/pause-716229/config.json ...
	I1217 01:30:19.500981   55831 start.go:360] acquireMachinesLock for pause-716229: {Name:mke100036b6b648b2e8844ce094d9d979b4c8eb4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1217 01:30:19.501078   55831 start.go:364] duration metric: took 78.023µs to acquireMachinesLock for "pause-716229"
	I1217 01:30:19.501097   55831 start.go:96] Skipping create...Using existing machine configuration
	I1217 01:30:19.501104   55831 fix.go:54] fixHost starting: 
	I1217 01:30:19.503236   55831 fix.go:112] recreateIfNeeded on pause-716229: state=Running err=<nil>
	W1217 01:30:19.503273   55831 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 01:30:17.918765   55454 ssh_runner.go:195] Run: systemctl --version
	I1217 01:30:17.943400   55454 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 01:30:18.106196   55454 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 01:30:18.113527   55454 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 01:30:18.113614   55454 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 01:30:18.134537   55454 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 01:30:18.134562   55454 start.go:496] detecting cgroup driver to use...
	I1217 01:30:18.134647   55454 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 01:30:18.154213   55454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 01:30:18.172110   55454 docker.go:218] disabling cri-docker service (if available) ...
	I1217 01:30:18.172170   55454 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 01:30:18.191592   55454 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 01:30:18.209930   55454 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 01:30:18.361857   55454 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 01:30:18.581111   55454 docker.go:234] disabling docker service ...
	I1217 01:30:18.581230   55454 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 01:30:18.598567   55454 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 01:30:18.614462   55454 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 01:30:18.782760   55454 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 01:30:18.946218   55454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 01:30:18.964663   55454 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 01:30:18.988966   55454 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 01:30:18.989047   55454 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:30:19.002547   55454 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 01:30:19.002635   55454 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:30:19.015876   55454 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:30:19.029565   55454 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:30:19.043526   55454 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 01:30:19.058113   55454 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:30:19.071938   55454 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:30:19.093094   55454 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:30:19.106495   55454 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 01:30:19.120291   55454 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1217 01:30:19.120356   55454 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1217 01:30:19.143690   55454 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 01:30:19.158984   55454 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:30:19.308060   55454 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 01:30:19.445792   55454 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 01:30:19.445862   55454 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 01:30:19.452514   55454 start.go:564] Will wait 60s for crictl version
	I1217 01:30:19.452576   55454 ssh_runner.go:195] Run: which crictl
	I1217 01:30:19.457325   55454 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1217 01:30:19.502208   55454 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1217 01:30:19.502285   55454 ssh_runner.go:195] Run: crio --version
	I1217 01:30:19.536243   55454 ssh_runner.go:195] Run: crio --version
	I1217 01:30:19.575076   55454 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.29.1 ...
	I1217 01:30:16.895058   51086 api_server.go:253] Checking apiserver healthz at https://192.168.39.33:8443/healthz ...
	I1217 01:30:18.023137   55074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:30:18.523091   55074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:30:19.022790   55074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:30:19.523271   55074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:30:20.022748   55074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:30:20.523271   55074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:30:21.023039   55074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:30:21.523098   55074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:30:22.023299   55074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:30:22.523045   55074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:30:19.579495   55454 main.go:143] libmachine: domain no-preload-395127 has defined MAC address 52:54:00:ee:9f:17 in network mk-no-preload-395127
	I1217 01:30:19.579989   55454 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ee:9f:17", ip: ""} in network mk-no-preload-395127: {Iface:virbr5 ExpiryTime:2025-12-17 02:30:13 +0000 UTC Type:0 Mac:52:54:00:ee:9f:17 Iaid: IPaddr:192.168.83.246 Prefix:24 Hostname:no-preload-395127 Clientid:01:52:54:00:ee:9f:17}
	I1217 01:30:19.580044   55454 main.go:143] libmachine: domain no-preload-395127 has defined IP address 192.168.83.246 and MAC address 52:54:00:ee:9f:17 in network mk-no-preload-395127
	I1217 01:30:19.580301   55454 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I1217 01:30:19.585678   55454 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 01:30:19.603304   55454 kubeadm.go:884] updating cluster {Name:no-preload-395127 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-395127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.246 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 01:30:19.603502   55454 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1217 01:30:19.603559   55454 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 01:30:19.638093   55454 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1217 01:30:19.638117   55454 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1217 01:30:19.638181   55454 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 01:30:19.638203   55454 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1217 01:30:19.638242   55454 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1217 01:30:19.638257   55454 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1217 01:30:19.638203   55454 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1217 01:30:19.638430   55454 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1217 01:30:19.638449   55454 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1217 01:30:19.638470   55454 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1217 01:30:19.640098   55454 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1217 01:30:19.640324   55454 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1217 01:30:19.640351   55454 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 01:30:19.640476   55454 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1217 01:30:19.640792   55454 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1217 01:30:19.641068   55454 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1217 01:30:19.641609   55454 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1217 01:30:19.641688   55454 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1217 01:30:19.760771   55454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1217 01:30:19.763269   55454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1217 01:30:19.769338   55454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1217 01:30:19.784594   55454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1217 01:30:19.787268   55454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1217 01:30:19.801226   55454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1217 01:30:19.812037   55454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1217 01:30:19.858294   55454 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc" in container runtime
	I1217 01:30:19.858331   55454 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1217 01:30:19.858377   55454 ssh_runner.go:195] Run: which crictl
	I1217 01:30:19.961625   55454 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1" in container runtime
	I1217 01:30:19.961676   55454 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1217 01:30:19.961677   55454 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b" in container runtime
	I1217 01:30:19.961709   55454 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1217 01:30:19.961718   55454 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810" in container runtime
	I1217 01:30:19.961732   55454 ssh_runner.go:195] Run: which crictl
	I1217 01:30:19.961740   55454 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1217 01:30:19.961761   55454 ssh_runner.go:195] Run: which crictl
	I1217 01:30:19.961774   55454 ssh_runner.go:195] Run: which crictl
	I1217 01:30:19.967153   55454 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1217 01:30:19.967186   55454 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1217 01:30:19.967228   55454 ssh_runner.go:195] Run: which crictl
	I1217 01:30:19.988225   55454 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46" in container runtime
	I1217 01:30:19.988267   55454 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1217 01:30:19.988316   55454 ssh_runner.go:195] Run: which crictl
	I1217 01:30:19.991260   55454 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139" in container runtime
	I1217 01:30:19.991298   55454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1217 01:30:19.991301   55454 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1217 01:30:19.991323   55454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1217 01:30:19.991342   55454 ssh_runner.go:195] Run: which crictl
	I1217 01:30:19.991388   55454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1217 01:30:19.991397   55454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1217 01:30:19.991452   55454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1217 01:30:19.999640   55454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1217 01:30:20.113226   55454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1217 01:30:20.113226   55454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1217 01:30:20.115274   55454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1217 01:30:20.115306   55454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1217 01:30:20.115325   55454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1217 01:30:20.115278   55454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1217 01:30:20.115331   55454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1217 01:30:20.246224   55454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.6.5-0
	I1217 01:30:20.246304   55454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1217 01:30:20.246333   55454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1217 01:30:20.246356   55454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1217 01:30:20.246363   55454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1217 01:30:20.246401   55454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1217 01:30:20.246417   55454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1217 01:30:20.369651   55454 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22168-12839/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0
	I1217 01:30:20.369696   55454 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22168-12839/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0
	I1217 01:30:20.369765   55454 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22168-12839/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0
	I1217 01:30:20.369787   55454 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1217 01:30:20.369802   55454 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22168-12839/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0
	I1217 01:30:20.369831   55454 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1217 01:30:20.369849   55454 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22168-12839/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0
	I1217 01:30:20.369767   55454 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1217 01:30:20.369769   55454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1217 01:30:20.369895   55454 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1217 01:30:20.369910   55454 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1217 01:30:20.369807   55454 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22168-12839/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1217 01:30:20.370026   55454 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1217 01:30:20.424105   55454 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22168-12839/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1217 01:30:20.424135   55454 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1217 01:30:20.424166   55454 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1217 01:30:20.424170   55454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (17239040 bytes)
	I1217 01:30:20.424183   55454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (22883840 bytes)
	I1217 01:30:20.424220   55454 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1217 01:30:20.424220   55454 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1217 01:30:20.424282   55454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (27682304 bytes)
	I1217 01:30:20.424308   55454 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1217 01:30:20.424251   55454 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1217 01:30:20.424329   55454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (25788928 bytes)
	I1217 01:30:20.424291   55454 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1217 01:30:20.424382   55454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (23131648 bytes)
	I1217 01:30:20.424354   55454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1217 01:30:20.444543   55454 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1217 01:30:20.444571   55454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (23562752 bytes)
	I1217 01:30:20.546396   55454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 01:30:20.556961   55454 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1217 01:30:20.557051   55454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1217 01:30:20.751302   55454 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1217 01:30:20.751348   55454 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 01:30:20.751408   55454 ssh_runner.go:195] Run: which crictl
	I1217 01:30:21.061521   55454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 01:30:21.061567   55454 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22168-12839/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1217 01:30:21.061609   55454 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1217 01:30:21.061675   55454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1217 01:30:21.208821   55454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 01:30:19.504720   55831 out.go:252] * Updating the running kvm2 "pause-716229" VM ...
	I1217 01:30:19.504749   55831 machine.go:94] provisionDockerMachine start ...
	I1217 01:30:19.507267   55831 main.go:143] libmachine: domain pause-716229 has defined MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:19.507772   55831 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:cd:79", ip: ""} in network mk-pause-716229: {Iface:virbr3 ExpiryTime:2025-12-17 02:29:14 +0000 UTC Type:0 Mac:52:54:00:41:cd:79 Iaid: IPaddr:192.168.61.9 Prefix:24 Hostname:pause-716229 Clientid:01:52:54:00:41:cd:79}
	I1217 01:30:19.507796   55831 main.go:143] libmachine: domain pause-716229 has defined IP address 192.168.61.9 and MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:19.507953   55831 main.go:143] libmachine: Using SSH client type: native
	I1217 01:30:19.508187   55831 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.61.9 22 <nil> <nil>}
	I1217 01:30:19.508203   55831 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 01:30:19.638055   55831 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-716229
	
	I1217 01:30:19.638093   55831 buildroot.go:166] provisioning hostname "pause-716229"
	I1217 01:30:19.643779   55831 main.go:143] libmachine: domain pause-716229 has defined MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:19.644347   55831 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:cd:79", ip: ""} in network mk-pause-716229: {Iface:virbr3 ExpiryTime:2025-12-17 02:29:14 +0000 UTC Type:0 Mac:52:54:00:41:cd:79 Iaid: IPaddr:192.168.61.9 Prefix:24 Hostname:pause-716229 Clientid:01:52:54:00:41:cd:79}
	I1217 01:30:19.644374   55831 main.go:143] libmachine: domain pause-716229 has defined IP address 192.168.61.9 and MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:19.644571   55831 main.go:143] libmachine: Using SSH client type: native
	I1217 01:30:19.644819   55831 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.61.9 22 <nil> <nil>}
	I1217 01:30:19.644833   55831 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-716229 && echo "pause-716229" | sudo tee /etc/hostname
	I1217 01:30:19.787607   55831 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-716229
	
	I1217 01:30:19.791566   55831 main.go:143] libmachine: domain pause-716229 has defined MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:19.792103   55831 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:cd:79", ip: ""} in network mk-pause-716229: {Iface:virbr3 ExpiryTime:2025-12-17 02:29:14 +0000 UTC Type:0 Mac:52:54:00:41:cd:79 Iaid: IPaddr:192.168.61.9 Prefix:24 Hostname:pause-716229 Clientid:01:52:54:00:41:cd:79}
	I1217 01:30:19.792138   55831 main.go:143] libmachine: domain pause-716229 has defined IP address 192.168.61.9 and MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:19.792331   55831 main.go:143] libmachine: Using SSH client type: native
	I1217 01:30:19.792620   55831 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.61.9 22 <nil> <nil>}
	I1217 01:30:19.792645   55831 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-716229' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-716229/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-716229' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 01:30:19.917161   55831 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 01:30:19.917193   55831 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22168-12839/.minikube CaCertPath:/home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22168-12839/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22168-12839/.minikube}
	I1217 01:30:19.917223   55831 buildroot.go:174] setting up certificates
	I1217 01:30:19.917236   55831 provision.go:84] configureAuth start
	I1217 01:30:19.920490   55831 main.go:143] libmachine: domain pause-716229 has defined MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:19.921061   55831 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:cd:79", ip: ""} in network mk-pause-716229: {Iface:virbr3 ExpiryTime:2025-12-17 02:29:14 +0000 UTC Type:0 Mac:52:54:00:41:cd:79 Iaid: IPaddr:192.168.61.9 Prefix:24 Hostname:pause-716229 Clientid:01:52:54:00:41:cd:79}
	I1217 01:30:19.921099   55831 main.go:143] libmachine: domain pause-716229 has defined IP address 192.168.61.9 and MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:19.924101   55831 main.go:143] libmachine: domain pause-716229 has defined MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:19.924504   55831 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:cd:79", ip: ""} in network mk-pause-716229: {Iface:virbr3 ExpiryTime:2025-12-17 02:29:14 +0000 UTC Type:0 Mac:52:54:00:41:cd:79 Iaid: IPaddr:192.168.61.9 Prefix:24 Hostname:pause-716229 Clientid:01:52:54:00:41:cd:79}
	I1217 01:30:19.924527   55831 main.go:143] libmachine: domain pause-716229 has defined IP address 192.168.61.9 and MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:19.924692   55831 provision.go:143] copyHostCerts
	I1217 01:30:19.924752   55831 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12839/.minikube/ca.pem, removing ...
	I1217 01:30:19.924772   55831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12839/.minikube/ca.pem
	I1217 01:30:19.924841   55831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22168-12839/.minikube/ca.pem (1078 bytes)
	I1217 01:30:19.924988   55831 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12839/.minikube/cert.pem, removing ...
	I1217 01:30:19.925002   55831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12839/.minikube/cert.pem
	I1217 01:30:19.925047   55831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22168-12839/.minikube/cert.pem (1123 bytes)
	I1217 01:30:19.925151   55831 exec_runner.go:144] found /home/jenkins/minikube-integration/22168-12839/.minikube/key.pem, removing ...
	I1217 01:30:19.925175   55831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22168-12839/.minikube/key.pem
	I1217 01:30:19.925208   55831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22168-12839/.minikube/key.pem (1679 bytes)
	I1217 01:30:19.925360   55831 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22168-12839/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca-key.pem org=jenkins.pause-716229 san=[127.0.0.1 192.168.61.9 localhost minikube pause-716229]
	I1217 01:30:19.984817   55831 provision.go:177] copyRemoteCerts
	I1217 01:30:19.984903   55831 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 01:30:19.987915   55831 main.go:143] libmachine: domain pause-716229 has defined MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:19.988514   55831 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:cd:79", ip: ""} in network mk-pause-716229: {Iface:virbr3 ExpiryTime:2025-12-17 02:29:14 +0000 UTC Type:0 Mac:52:54:00:41:cd:79 Iaid: IPaddr:192.168.61.9 Prefix:24 Hostname:pause-716229 Clientid:01:52:54:00:41:cd:79}
	I1217 01:30:19.988557   55831 main.go:143] libmachine: domain pause-716229 has defined IP address 192.168.61.9 and MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:19.988798   55831 sshutil.go:53] new ssh client: &{IP:192.168.61.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/pause-716229/id_rsa Username:docker}
	I1217 01:30:20.085814   55831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 01:30:20.125959   55831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1217 01:30:20.166703   55831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 01:30:20.208381   55831 provision.go:87] duration metric: took 291.127344ms to configureAuth
	I1217 01:30:20.208410   55831 buildroot.go:189] setting minikube options for container-runtime
	I1217 01:30:20.208679   55831 config.go:182] Loaded profile config "pause-716229": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:30:20.212425   55831 main.go:143] libmachine: domain pause-716229 has defined MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:20.212953   55831 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:cd:79", ip: ""} in network mk-pause-716229: {Iface:virbr3 ExpiryTime:2025-12-17 02:29:14 +0000 UTC Type:0 Mac:52:54:00:41:cd:79 Iaid: IPaddr:192.168.61.9 Prefix:24 Hostname:pause-716229 Clientid:01:52:54:00:41:cd:79}
	I1217 01:30:20.212990   55831 main.go:143] libmachine: domain pause-716229 has defined IP address 192.168.61.9 and MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:20.213266   55831 main.go:143] libmachine: Using SSH client type: native
	I1217 01:30:20.213561   55831 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.61.9 22 <nil> <nil>}
	I1217 01:30:20.213591   55831 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 01:30:21.896534   51086 api_server.go:269] stopped: https://192.168.39.33:8443/healthz: Get "https://192.168.39.33:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1217 01:30:21.896605   51086 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:30:21.896668   51086 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:30:21.939001   51086 cri.go:89] found id: "f3fc173ad1f59bcf9f322121a37044b907e449b8da0ff80d4a58047149d92060"
	I1217 01:30:21.939045   51086 cri.go:89] found id: "a60d85d10467e0ec2ad371d5ea0776e03d016cdc978561fa1498e90cabe0974e"
	I1217 01:30:21.939052   51086 cri.go:89] found id: ""
	I1217 01:30:21.939061   51086 logs.go:282] 2 containers: [f3fc173ad1f59bcf9f322121a37044b907e449b8da0ff80d4a58047149d92060 a60d85d10467e0ec2ad371d5ea0776e03d016cdc978561fa1498e90cabe0974e]
	I1217 01:30:21.939136   51086 ssh_runner.go:195] Run: which crictl
	I1217 01:30:21.944115   51086 ssh_runner.go:195] Run: which crictl
	I1217 01:30:21.948749   51086 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:30:21.948831   51086 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:30:21.986091   51086 cri.go:89] found id: "4f7ae7022f37ebd3af667ff2925618ae8cba786b93551c6a75ea31610b440a23"
	I1217 01:30:21.986131   51086 cri.go:89] found id: ""
	I1217 01:30:21.986141   51086 logs.go:282] 1 containers: [4f7ae7022f37ebd3af667ff2925618ae8cba786b93551c6a75ea31610b440a23]
	I1217 01:30:21.986213   51086 ssh_runner.go:195] Run: which crictl
	I1217 01:30:21.990716   51086 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:30:21.990789   51086 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:30:22.030581   51086 cri.go:89] found id: "4e737919b3d271d8ca8cbf13101f204569ffdaf5b0e2afcbd8ab8a5d2febccbb"
	I1217 01:30:22.030611   51086 cri.go:89] found id: "af1f0e1b40070c8723057b438d0b9fcec2ba7192a110956e8eb26a35dad9df7c"
	I1217 01:30:22.030617   51086 cri.go:89] found id: ""
	I1217 01:30:22.030627   51086 logs.go:282] 2 containers: [4e737919b3d271d8ca8cbf13101f204569ffdaf5b0e2afcbd8ab8a5d2febccbb af1f0e1b40070c8723057b438d0b9fcec2ba7192a110956e8eb26a35dad9df7c]
	I1217 01:30:22.030696   51086 ssh_runner.go:195] Run: which crictl
	I1217 01:30:22.035383   51086 ssh_runner.go:195] Run: which crictl
	I1217 01:30:22.039755   51086 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:30:22.039838   51086 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:30:22.077306   51086 cri.go:89] found id: "11c7ac9704180c5215010fc334223b3ee30a5e3fcbf5bc9668670b480422a43b"
	I1217 01:30:22.077340   51086 cri.go:89] found id: "fc4edf606d72313439460b2d586b38df7af1e915c6576704df53a9f7e030a722"
	I1217 01:30:22.077348   51086 cri.go:89] found id: ""
	I1217 01:30:22.077357   51086 logs.go:282] 2 containers: [11c7ac9704180c5215010fc334223b3ee30a5e3fcbf5bc9668670b480422a43b fc4edf606d72313439460b2d586b38df7af1e915c6576704df53a9f7e030a722]
	I1217 01:30:22.077426   51086 ssh_runner.go:195] Run: which crictl
	I1217 01:30:22.082080   51086 ssh_runner.go:195] Run: which crictl
	I1217 01:30:22.086755   51086 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:30:22.086839   51086 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:30:22.134568   51086 cri.go:89] found id: "2ae0779ffea3f0ab8e7bd0cdb83221c0aa0fb5faea9cec5cb3745615ebae1c13"
	I1217 01:30:22.134592   51086 cri.go:89] found id: ""
	I1217 01:30:22.134602   51086 logs.go:282] 1 containers: [2ae0779ffea3f0ab8e7bd0cdb83221c0aa0fb5faea9cec5cb3745615ebae1c13]
	I1217 01:30:22.134659   51086 ssh_runner.go:195] Run: which crictl
	I1217 01:30:22.140647   51086 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:30:22.140723   51086 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:30:22.182597   51086 cri.go:89] found id: "92af4713cd6f5e3fd61ec4f8918aa425ca8e6b874f208f1ed31de3d4b68216c9"
	I1217 01:30:22.182621   51086 cri.go:89] found id: "889d1c9d59279febbb656a88595a686eec2af7afcbdd3130103e5d346977780c"
	I1217 01:30:22.182626   51086 cri.go:89] found id: ""
	I1217 01:30:22.182636   51086 logs.go:282] 2 containers: [92af4713cd6f5e3fd61ec4f8918aa425ca8e6b874f208f1ed31de3d4b68216c9 889d1c9d59279febbb656a88595a686eec2af7afcbdd3130103e5d346977780c]
	I1217 01:30:22.182708   51086 ssh_runner.go:195] Run: which crictl
	I1217 01:30:22.187128   51086 ssh_runner.go:195] Run: which crictl
	I1217 01:30:22.191716   51086 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:30:22.191782   51086 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:30:22.237439   51086 cri.go:89] found id: ""
	I1217 01:30:22.237467   51086 logs.go:282] 0 containers: []
	W1217 01:30:22.237479   51086 logs.go:284] No container was found matching "kindnet"
	I1217 01:30:22.237488   51086 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:30:22.237553   51086 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:30:22.290443   51086 cri.go:89] found id: "04162ab2dd069b12b50be94adf86c654f689f3762162dd1256b82a03c40d9274"
	I1217 01:30:22.290535   51086 cri.go:89] found id: "02ce2c49a1d0f774c8135d8233229979162a7bb38d6f23fcdc2cd03bd2231c24"
	I1217 01:30:22.290546   51086 cri.go:89] found id: ""
	I1217 01:30:22.290572   51086 logs.go:282] 2 containers: [04162ab2dd069b12b50be94adf86c654f689f3762162dd1256b82a03c40d9274 02ce2c49a1d0f774c8135d8233229979162a7bb38d6f23fcdc2cd03bd2231c24]
	I1217 01:30:22.290640   51086 ssh_runner.go:195] Run: which crictl
	I1217 01:30:22.295256   51086 ssh_runner.go:195] Run: which crictl
	I1217 01:30:22.300346   51086 logs.go:123] Gathering logs for coredns [af1f0e1b40070c8723057b438d0b9fcec2ba7192a110956e8eb26a35dad9df7c] ...
	I1217 01:30:22.300369   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af1f0e1b40070c8723057b438d0b9fcec2ba7192a110956e8eb26a35dad9df7c"
	I1217 01:30:22.348678   51086 logs.go:123] Gathering logs for kube-scheduler [11c7ac9704180c5215010fc334223b3ee30a5e3fcbf5bc9668670b480422a43b] ...
	I1217 01:30:22.348714   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c7ac9704180c5215010fc334223b3ee30a5e3fcbf5bc9668670b480422a43b"
	I1217 01:30:22.445622   51086 logs.go:123] Gathering logs for kube-scheduler [fc4edf606d72313439460b2d586b38df7af1e915c6576704df53a9f7e030a722] ...
	I1217 01:30:22.445661   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc4edf606d72313439460b2d586b38df7af1e915c6576704df53a9f7e030a722"
	I1217 01:30:22.491628   51086 logs.go:123] Gathering logs for kube-controller-manager [889d1c9d59279febbb656a88595a686eec2af7afcbdd3130103e5d346977780c] ...
	I1217 01:30:22.491655   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 889d1c9d59279febbb656a88595a686eec2af7afcbdd3130103e5d346977780c"
	I1217 01:30:22.538713   51086 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:30:22.538758   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:30:22.891737   51086 logs.go:123] Gathering logs for dmesg ...
	I1217 01:30:22.891773   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:30:22.908994   51086 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:30:22.909037   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1217 01:30:23.022809   55074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:30:23.523273   55074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:30:24.023255   55074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:30:24.523359   55074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:30:25.022434   55074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:30:25.522884   55074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:30:26.022354   55074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:30:26.522298   55074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:30:27.022818   55074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:30:27.522335   55074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:30:23.777589   55454 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: (2.715885236s)
	I1217 01:30:23.777620   55454 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22168-12839/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-beta.0 from cache
	I1217 01:30:23.777651   55454 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1217 01:30:23.777650   55454 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.568791096s)
	I1217 01:30:23.777702   55454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0
	I1217 01:30:23.777718   55454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 01:30:26.064481   55454 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.5-0: (2.286752732s)
	I1217 01:30:26.064521   55454 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22168-12839/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.5-0 from cache
	I1217 01:30:26.064547   55454 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1217 01:30:26.064608   55454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1217 01:30:26.064544   55454 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.28680286s)
	I1217 01:30:26.064682   55454 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22168-12839/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1217 01:30:26.064775   55454 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1217 01:30:28.022847   55074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 01:30:28.226137   55074 kubeadm.go:1114] duration metric: took 12.443127084s to wait for elevateKubeSystemPrivileges
	I1217 01:30:28.226177   55074 kubeadm.go:403] duration metric: took 24.315193301s to StartCluster
	I1217 01:30:28.226196   55074 settings.go:142] acquiring lock: {Name:mk0fa06a6a557f0851b041158306daec92094c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:30:28.226284   55074 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22168-12839/kubeconfig
	I1217 01:30:28.227716   55074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12839/kubeconfig: {Name:mk0867cff530c231805e36a9674d4fe6612173a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:30:28.228005   55074 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.72.223 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 01:30:28.228074   55074 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 01:30:28.228225   55074 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-625875"
	I1217 01:30:28.228244   55074 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-625875"
	I1217 01:30:28.228244   55074 config.go:182] Loaded profile config "old-k8s-version-625875": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1217 01:30:28.228281   55074 host.go:66] Checking if "old-k8s-version-625875" exists ...
	I1217 01:30:28.228053   55074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 01:30:28.228375   55074 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-625875"
	I1217 01:30:28.228410   55074 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-625875"
	I1217 01:30:28.229913   55074 out.go:179] * Verifying Kubernetes components...
	I1217 01:30:28.232530   55074 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-625875"
	I1217 01:30:28.232567   55074 host.go:66] Checking if "old-k8s-version-625875" exists ...
	I1217 01:30:28.233234   55074 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 01:30:25.882124   55831 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 01:30:25.882158   55831 machine.go:97] duration metric: took 6.37739993s to provisionDockerMachine
	I1217 01:30:25.882173   55831 start.go:293] postStartSetup for "pause-716229" (driver="kvm2")
	I1217 01:30:25.882210   55831 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 01:30:25.882298   55831 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 01:30:25.886127   55831 main.go:143] libmachine: domain pause-716229 has defined MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:25.886654   55831 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:cd:79", ip: ""} in network mk-pause-716229: {Iface:virbr3 ExpiryTime:2025-12-17 02:29:14 +0000 UTC Type:0 Mac:52:54:00:41:cd:79 Iaid: IPaddr:192.168.61.9 Prefix:24 Hostname:pause-716229 Clientid:01:52:54:00:41:cd:79}
	I1217 01:30:25.886683   55831 main.go:143] libmachine: domain pause-716229 has defined IP address 192.168.61.9 and MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:25.886836   55831 sshutil.go:53] new ssh client: &{IP:192.168.61.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/pause-716229/id_rsa Username:docker}
	I1217 01:30:25.981319   55831 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 01:30:25.986394   55831 info.go:137] Remote host: Buildroot 2025.02
	I1217 01:30:25.986418   55831 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12839/.minikube/addons for local assets ...
	I1217 01:30:25.986487   55831 filesync.go:126] Scanning /home/jenkins/minikube-integration/22168-12839/.minikube/files for local assets ...
	I1217 01:30:25.986592   55831 filesync.go:149] local asset: /home/jenkins/minikube-integration/22168-12839/.minikube/files/etc/ssl/certs/170742.pem -> 170742.pem in /etc/ssl/certs
	I1217 01:30:25.986710   55831 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 01:30:25.999107   55831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/files/etc/ssl/certs/170742.pem --> /etc/ssl/certs/170742.pem (1708 bytes)
	I1217 01:30:26.032262   55831 start.go:296] duration metric: took 150.072721ms for postStartSetup
	I1217 01:30:26.032309   55831 fix.go:56] duration metric: took 6.531204073s for fixHost
	I1217 01:30:26.035448   55831 main.go:143] libmachine: domain pause-716229 has defined MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:26.035824   55831 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:cd:79", ip: ""} in network mk-pause-716229: {Iface:virbr3 ExpiryTime:2025-12-17 02:29:14 +0000 UTC Type:0 Mac:52:54:00:41:cd:79 Iaid: IPaddr:192.168.61.9 Prefix:24 Hostname:pause-716229 Clientid:01:52:54:00:41:cd:79}
	I1217 01:30:26.035847   55831 main.go:143] libmachine: domain pause-716229 has defined IP address 192.168.61.9 and MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:26.036044   55831 main.go:143] libmachine: Using SSH client type: native
	I1217 01:30:26.036305   55831 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.61.9 22 <nil> <nil>}
	I1217 01:30:26.036319   55831 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1217 01:30:26.159443   55831 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765935026.155536878
	
	I1217 01:30:26.159474   55831 fix.go:216] guest clock: 1765935026.155536878
	I1217 01:30:26.159482   55831 fix.go:229] Guest: 2025-12-17 01:30:26.155536878 +0000 UTC Remote: 2025-12-17 01:30:26.032314252 +0000 UTC m=+6.652094255 (delta=123.222626ms)
	I1217 01:30:26.159501   55831 fix.go:200] guest clock delta is within tolerance: 123.222626ms
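	The three fix.go lines above are the guest clock-skew check: minikube runs `date +%s.%N` on the guest over SSH, diffs it against the host clock, and, per the "within tolerance" message, only intervenes when the delta is too large (the tolerance value itself is not printed here). A rough manual equivalent, reusing the SSH key path and guest IP from this log:

    GUEST=$(ssh -i /home/jenkins/minikube-integration/22168-12839/.minikube/machines/pause-716229/id_rsa docker@192.168.61.9 'date +%s.%N')
    HOST=$(date +%s.%N)
    awk -v h="$HOST" -v g="$GUEST" 'BEGIN { printf "guest clock delta: %+.6fs\n", h - g }'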
	I1217 01:30:26.159507   55831 start.go:83] releasing machines lock for "pause-716229", held for 6.658418729s
	I1217 01:30:26.163103   55831 main.go:143] libmachine: domain pause-716229 has defined MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:26.163720   55831 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:cd:79", ip: ""} in network mk-pause-716229: {Iface:virbr3 ExpiryTime:2025-12-17 02:29:14 +0000 UTC Type:0 Mac:52:54:00:41:cd:79 Iaid: IPaddr:192.168.61.9 Prefix:24 Hostname:pause-716229 Clientid:01:52:54:00:41:cd:79}
	I1217 01:30:26.163776   55831 main.go:143] libmachine: domain pause-716229 has defined IP address 192.168.61.9 and MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:26.164643   55831 ssh_runner.go:195] Run: cat /version.json
	I1217 01:30:26.164777   55831 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 01:30:26.168104   55831 main.go:143] libmachine: domain pause-716229 has defined MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:26.168166   55831 main.go:143] libmachine: domain pause-716229 has defined MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:26.168563   55831 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:cd:79", ip: ""} in network mk-pause-716229: {Iface:virbr3 ExpiryTime:2025-12-17 02:29:14 +0000 UTC Type:0 Mac:52:54:00:41:cd:79 Iaid: IPaddr:192.168.61.9 Prefix:24 Hostname:pause-716229 Clientid:01:52:54:00:41:cd:79}
	I1217 01:30:26.168607   55831 main.go:143] libmachine: domain pause-716229 has defined IP address 192.168.61.9 and MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:26.168657   55831 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:cd:79", ip: ""} in network mk-pause-716229: {Iface:virbr3 ExpiryTime:2025-12-17 02:29:14 +0000 UTC Type:0 Mac:52:54:00:41:cd:79 Iaid: IPaddr:192.168.61.9 Prefix:24 Hostname:pause-716229 Clientid:01:52:54:00:41:cd:79}
	I1217 01:30:26.168705   55831 main.go:143] libmachine: domain pause-716229 has defined IP address 192.168.61.9 and MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:26.168817   55831 sshutil.go:53] new ssh client: &{IP:192.168.61.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/pause-716229/id_rsa Username:docker}
	I1217 01:30:26.169069   55831 sshutil.go:53] new ssh client: &{IP:192.168.61.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/pause-716229/id_rsa Username:docker}
	I1217 01:30:26.315315   55831 ssh_runner.go:195] Run: systemctl --version
	I1217 01:30:26.328723   55831 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 01:30:26.561010   55831 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 01:30:26.585959   55831 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 01:30:26.586076   55831 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 01:30:26.641658   55831 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 01:30:26.641684   55831 start.go:496] detecting cgroup driver to use...
	I1217 01:30:26.641768   55831 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 01:30:26.693772   55831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 01:30:26.719728   55831 docker.go:218] disabling cri-docker service (if available) ...
	I1217 01:30:26.719802   55831 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 01:30:26.765707   55831 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 01:30:26.805381   55831 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 01:30:27.223191   55831 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 01:30:27.618734   55831 docker.go:234] disabling docker service ...
	I1217 01:30:27.618813   55831 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 01:30:27.696855   55831 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 01:30:27.740817   55831 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 01:30:28.152831   55831 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 01:30:28.522103   55831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 01:30:28.547218   55831 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 01:30:28.592003   55831 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 01:30:28.592089   55831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:30:28.637881   55831 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1217 01:30:28.637959   55831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:30:28.676778   55831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:30:28.703880   55831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:30:28.732130   55831 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 01:30:28.761140   55831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:30:28.777472   55831 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:30:28.798808   55831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 01:30:28.822861   55831 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 01:30:28.842091   55831 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 01:30:28.864630   55831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:30:29.210820   55831 ssh_runner.go:195] Run: sudo systemctl restart crio
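	Taken together, the sed edits above boil down to a handful of keys in /etc/crio/crio.conf.d/02-crio.conf; an illustrative view of the intended end state (surrounding keys vary with the CRI-O version) and one way to confirm CRI-O picked the values up after the restart:

    # Intended values in /etc/crio/crio.conf.d/02-crio.conf after the edits:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [
    #     "net.ipv4.ip_unprivileged_port_start=0",
    #   ]
    sudo crio config | grep -E 'pause_image|cgroup_manager|conmon_cgroup'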
	I1217 01:30:28.233405   55074 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:30:28.234435   55074 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 01:30:28.234455   55074 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 01:30:28.234642   55074 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 01:30:28.234657   55074 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 01:30:28.239255   55074 main.go:143] libmachine: domain old-k8s-version-625875 has defined MAC address 52:54:00:dd:10:92 in network mk-old-k8s-version-625875
	I1217 01:30:28.239783   55074 main.go:143] libmachine: domain old-k8s-version-625875 has defined MAC address 52:54:00:dd:10:92 in network mk-old-k8s-version-625875
	I1217 01:30:28.239826   55074 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:dd:10:92", ip: ""} in network mk-old-k8s-version-625875: {Iface:virbr4 ExpiryTime:2025-12-17 02:29:51 +0000 UTC Type:0 Mac:52:54:00:dd:10:92 Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:old-k8s-version-625875 Clientid:01:52:54:00:dd:10:92}
	I1217 01:30:28.239864   55074 main.go:143] libmachine: domain old-k8s-version-625875 has defined IP address 192.168.72.223 and MAC address 52:54:00:dd:10:92 in network mk-old-k8s-version-625875
	I1217 01:30:28.240468   55074 sshutil.go:53] new ssh client: &{IP:192.168.72.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/old-k8s-version-625875/id_rsa Username:docker}
	I1217 01:30:28.241049   55074 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:dd:10:92", ip: ""} in network mk-old-k8s-version-625875: {Iface:virbr4 ExpiryTime:2025-12-17 02:29:51 +0000 UTC Type:0 Mac:52:54:00:dd:10:92 Iaid: IPaddr:192.168.72.223 Prefix:24 Hostname:old-k8s-version-625875 Clientid:01:52:54:00:dd:10:92}
	I1217 01:30:28.241085   55074 main.go:143] libmachine: domain old-k8s-version-625875 has defined IP address 192.168.72.223 and MAC address 52:54:00:dd:10:92 in network mk-old-k8s-version-625875
	I1217 01:30:28.241610   55074 sshutil.go:53] new ssh client: &{IP:192.168.72.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/old-k8s-version-625875/id_rsa Username:docker}
	I1217 01:30:28.415211   55074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 01:30:28.564576   55074 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 01:30:28.745229   55074 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 01:30:28.892347   55074 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 01:30:30.293922   55074 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.878664512s)
	I1217 01:30:30.293960   55074 start.go:977] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
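	The sed pipeline completed above edits the Corefile inside the coredns ConfigMap in place: it inserts a `log` directive and a `hosts` block ahead of the `forward . /etc/resolv.conf` line, which is what makes host.minikube.internal resolvable from pods. An illustrative fragment of the patched Corefile and one way to inspect it (the plain kubectl invocation is an assumption; the test itself drives kubectl through the kubeconfig shown above):

    #   log
    #   hosts {
    #      192.168.72.1 host.minikube.internal
    #      fallthrough
    #   }
    #   forward . /etc/resolv.conf
    kubectl -n kube-system get configmap coredns -o yaml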
	I1217 01:30:30.293976   55074 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.729336989s)
	I1217 01:30:30.294042   55074 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.548756208s)
	I1217 01:30:30.295294   55074 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-625875" to be "Ready" ...
	I1217 01:30:30.320339   55074 node_ready.go:49] node "old-k8s-version-625875" is "Ready"
	I1217 01:30:30.320366   55074 node_ready.go:38] duration metric: took 25.031141ms for node "old-k8s-version-625875" to be "Ready" ...
	I1217 01:30:30.320381   55074 api_server.go:52] waiting for apiserver process to appear ...
	I1217 01:30:30.320451   55074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:30:30.783877   55074 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.891482625s)
	I1217 01:30:30.783882   55074 api_server.go:72] duration metric: took 2.555828475s to wait for apiserver process to appear ...
	I1217 01:30:30.783964   55074 api_server.go:88] waiting for apiserver healthz status ...
	I1217 01:30:30.784004   55074 api_server.go:253] Checking apiserver healthz at https://192.168.72.223:8443/healthz ...
	I1217 01:30:30.786946   55074 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1217 01:30:30.788431   55074 addons.go:530] duration metric: took 2.56035709s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1217 01:30:30.795203   55074 api_server.go:279] https://192.168.72.223:8443/healthz returned 200:
	ok
	I1217 01:30:30.799771   55074 api_server.go:141] control plane version: v1.28.0
	I1217 01:30:30.799807   55074 api_server.go:131] duration metric: took 15.821862ms to wait for apiserver health ...
	I1217 01:30:30.799822   55074 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 01:30:30.801046   55074 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-625875" context rescaled to 1 replicas
	I1217 01:30:30.835925   55074 system_pods.go:59] 8 kube-system pods found
	I1217 01:30:30.835968   55074 system_pods.go:61] "coredns-5dd5756b68-d46w4" [fed3a512-ea6c-4689-bd7b-e20329782c19] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 01:30:30.835980   55074 system_pods.go:61] "coredns-5dd5756b68-zrj9b" [b062fb48-19a3-4f1b-8b82-ee4be095f5be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 01:30:30.835989   55074 system_pods.go:61] "etcd-old-k8s-version-625875" [fc2b860d-240b-4306-b025-cf994e2e27e1] Running
	I1217 01:30:30.835996   55074 system_pods.go:61] "kube-apiserver-old-k8s-version-625875" [230920a9-2718-4952-8d7d-54244dbf3677] Running
	I1217 01:30:30.836001   55074 system_pods.go:61] "kube-controller-manager-old-k8s-version-625875" [2dd1a95b-0eb2-4084-9b77-e8ad47662e1a] Running
	I1217 01:30:30.836007   55074 system_pods.go:61] "kube-proxy-knddz" [851cb2e5-3111-4b21-9295-ffb6800af552] Running
	I1217 01:30:30.836015   55074 system_pods.go:61] "kube-scheduler-old-k8s-version-625875" [953e65e2-9fd5-456a-b0a2-d24b0c7c5945] Running
	I1217 01:30:30.836063   55074 system_pods.go:61] "storage-provisioner" [c0f7895a-51be-400b-bc50-2ade62ea8883] Pending
	I1217 01:30:30.836072   55074 system_pods.go:74] duration metric: took 36.241884ms to wait for pod list to return data ...
	I1217 01:30:30.836082   55074 default_sa.go:34] waiting for default service account to be created ...
	I1217 01:30:30.852150   55074 default_sa.go:45] found service account: "default"
	I1217 01:30:30.852181   55074 default_sa.go:55] duration metric: took 16.088177ms for default service account to be created ...
	I1217 01:30:30.852194   55074 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 01:30:30.869152   55074 system_pods.go:86] 8 kube-system pods found
	I1217 01:30:30.869192   55074 system_pods.go:89] "coredns-5dd5756b68-d46w4" [fed3a512-ea6c-4689-bd7b-e20329782c19] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 01:30:30.869202   55074 system_pods.go:89] "coredns-5dd5756b68-zrj9b" [b062fb48-19a3-4f1b-8b82-ee4be095f5be] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 01:30:30.869211   55074 system_pods.go:89] "etcd-old-k8s-version-625875" [fc2b860d-240b-4306-b025-cf994e2e27e1] Running
	I1217 01:30:30.869219   55074 system_pods.go:89] "kube-apiserver-old-k8s-version-625875" [230920a9-2718-4952-8d7d-54244dbf3677] Running
	I1217 01:30:30.869224   55074 system_pods.go:89] "kube-controller-manager-old-k8s-version-625875" [2dd1a95b-0eb2-4084-9b77-e8ad47662e1a] Running
	I1217 01:30:30.869230   55074 system_pods.go:89] "kube-proxy-knddz" [851cb2e5-3111-4b21-9295-ffb6800af552] Running
	I1217 01:30:30.869235   55074 system_pods.go:89] "kube-scheduler-old-k8s-version-625875" [953e65e2-9fd5-456a-b0a2-d24b0c7c5945] Running
	I1217 01:30:30.869243   55074 system_pods.go:89] "storage-provisioner" [c0f7895a-51be-400b-bc50-2ade62ea8883] Pending
	I1217 01:30:30.869271   55074 retry.go:31] will retry after 285.428706ms: missing components: kube-dns
	I1217 01:30:31.178416   55074 system_pods.go:86] 8 kube-system pods found
	I1217 01:30:31.178470   55074 system_pods.go:89] "coredns-5dd5756b68-d46w4" [fed3a512-ea6c-4689-bd7b-e20329782c19] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 01:30:31.178481   55074 system_pods.go:89] "coredns-5dd5756b68-zrj9b" [b062fb48-19a3-4f1b-8b82-ee4be095f5be] Failed / Ready:PodFailed / ContainersReady:PodFailed
	I1217 01:30:31.178489   55074 system_pods.go:89] "etcd-old-k8s-version-625875" [fc2b860d-240b-4306-b025-cf994e2e27e1] Running
	I1217 01:30:31.178494   55074 system_pods.go:89] "kube-apiserver-old-k8s-version-625875" [230920a9-2718-4952-8d7d-54244dbf3677] Running
	I1217 01:30:31.178499   55074 system_pods.go:89] "kube-controller-manager-old-k8s-version-625875" [2dd1a95b-0eb2-4084-9b77-e8ad47662e1a] Running
	I1217 01:30:31.178504   55074 system_pods.go:89] "kube-proxy-knddz" [851cb2e5-3111-4b21-9295-ffb6800af552] Running
	I1217 01:30:31.178509   55074 system_pods.go:89] "kube-scheduler-old-k8s-version-625875" [953e65e2-9fd5-456a-b0a2-d24b0c7c5945] Running
	I1217 01:30:31.178516   55074 system_pods.go:89] "storage-provisioner" [c0f7895a-51be-400b-bc50-2ade62ea8883] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 01:30:31.178526   55074 system_pods.go:126] duration metric: took 326.324606ms to wait for k8s-apps to be running ...
	I1217 01:30:31.178537   55074 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 01:30:31.178590   55074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:30:31.203345   55074 system_svc.go:56] duration metric: took 24.794791ms WaitForService to wait for kubelet
	I1217 01:30:31.203381   55074 kubeadm.go:587] duration metric: took 2.975329276s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 01:30:31.203425   55074 node_conditions.go:102] verifying NodePressure condition ...
	I1217 01:30:31.206446   55074 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1217 01:30:31.206482   55074 node_conditions.go:123] node cpu capacity is 2
	I1217 01:30:31.206500   55074 node_conditions.go:105] duration metric: took 3.068585ms to run NodePressure ...
	I1217 01:30:31.206516   55074 start.go:242] waiting for startup goroutines ...
	I1217 01:30:31.206530   55074 start.go:247] waiting for cluster config update ...
	I1217 01:30:31.206543   55074 start.go:256] writing updated cluster config ...
	I1217 01:30:31.206863   55074 ssh_runner.go:195] Run: rm -f paused
	I1217 01:30:31.214476   55074 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 01:30:31.220146   55074 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-d46w4" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:30:32.226293   55074 pod_ready.go:94] pod "coredns-5dd5756b68-d46w4" is "Ready"
	I1217 01:30:32.226320   55074 pod_ready.go:86] duration metric: took 1.006144632s for pod "coredns-5dd5756b68-d46w4" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:30:32.226331   55074 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-zrj9b" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:30:32.229552   55074 pod_ready.go:99] pod "coredns-5dd5756b68-zrj9b" in "kube-system" namespace is gone: getting pod "coredns-5dd5756b68-zrj9b" in "kube-system" namespace (will retry): pods "coredns-5dd5756b68-zrj9b" not found
	I1217 01:30:32.229570   55074 pod_ready.go:86] duration metric: took 3.232306ms for pod "coredns-5dd5756b68-zrj9b" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:30:32.233254   55074 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-625875" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:30:32.239569   55074 pod_ready.go:94] pod "etcd-old-k8s-version-625875" is "Ready"
	I1217 01:30:32.239600   55074 pod_ready.go:86] duration metric: took 6.320312ms for pod "etcd-old-k8s-version-625875" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:30:32.244167   55074 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-625875" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:30:32.252968   55074 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-625875" is "Ready"
	I1217 01:30:32.252992   55074 pod_ready.go:86] duration metric: took 8.802019ms for pod "kube-apiserver-old-k8s-version-625875" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:30:32.259974   55074 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-625875" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:30:32.624417   55074 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-625875" is "Ready"
	I1217 01:30:32.624459   55074 pod_ready.go:86] duration metric: took 364.460199ms for pod "kube-controller-manager-old-k8s-version-625875" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:30:28.032357   55454 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: (1.967721827s)
	I1217 01:30:28.032400   55454 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22168-12839/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-beta.0 from cache
	I1217 01:30:28.032425   55454 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1217 01:30:28.032450   55454 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.967630397s)
	I1217 01:30:28.032477   55454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1217 01:30:28.032484   55454 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1217 01:30:28.032513   55454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1217 01:30:30.343204   55454 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (2.310681804s)
	I1217 01:30:30.343254   55454 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22168-12839/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1217 01:30:30.343296   55454 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1217 01:30:30.343391   55454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1217 01:30:32.110834   55454 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: (1.767419827s)
	I1217 01:30:32.110866   55454 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22168-12839/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-beta.0 from cache
	I1217 01:30:32.110910   55454 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1217 01:30:32.110971   55454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1217 01:30:32.829000   55074 pod_ready.go:83] waiting for pod "kube-proxy-knddz" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:30:33.224703   55074 pod_ready.go:94] pod "kube-proxy-knddz" is "Ready"
	I1217 01:30:33.224737   55074 pod_ready.go:86] duration metric: took 395.691073ms for pod "kube-proxy-knddz" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:30:33.425807   55074 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-625875" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:30:33.823856   55074 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-625875" is "Ready"
	I1217 01:30:33.823882   55074 pod_ready.go:86] duration metric: took 398.049592ms for pod "kube-scheduler-old-k8s-version-625875" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:30:33.823895   55074 pod_ready.go:40] duration metric: took 2.609377607s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 01:30:33.877121   55074 start.go:625] kubectl: 1.34.3, cluster: 1.28.0 (minor skew: 6)
	I1217 01:30:33.928163   55074 out.go:203] 
	W1217 01:30:33.929163   55074 out.go:285] ! /usr/local/bin/kubectl is version 1.34.3, which may have incompatibilities with Kubernetes 1.28.0.
	I1217 01:30:33.930598   55074 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1217 01:30:33.932525   55074 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-625875" cluster and "default" namespace by default
	I1217 01:30:32.997065   51086 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.088003038s)
	W1217 01:30:32.997111   51086 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1217 01:30:32.997120   51086 logs.go:123] Gathering logs for kube-apiserver [f3fc173ad1f59bcf9f322121a37044b907e449b8da0ff80d4a58047149d92060] ...
	I1217 01:30:32.997133   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3fc173ad1f59bcf9f322121a37044b907e449b8da0ff80d4a58047149d92060"
	I1217 01:30:33.054723   51086 logs.go:123] Gathering logs for etcd [4f7ae7022f37ebd3af667ff2925618ae8cba786b93551c6a75ea31610b440a23] ...
	I1217 01:30:33.054757   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f7ae7022f37ebd3af667ff2925618ae8cba786b93551c6a75ea31610b440a23"
	I1217 01:30:33.101351   51086 logs.go:123] Gathering logs for coredns [4e737919b3d271d8ca8cbf13101f204569ffdaf5b0e2afcbd8ab8a5d2febccbb] ...
	I1217 01:30:33.101381   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e737919b3d271d8ca8cbf13101f204569ffdaf5b0e2afcbd8ab8a5d2febccbb"
	I1217 01:30:33.161254   51086 logs.go:123] Gathering logs for storage-provisioner [02ce2c49a1d0f774c8135d8233229979162a7bb38d6f23fcdc2cd03bd2231c24] ...
	I1217 01:30:33.161286   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02ce2c49a1d0f774c8135d8233229979162a7bb38d6f23fcdc2cd03bd2231c24"
	I1217 01:30:33.210096   51086 logs.go:123] Gathering logs for kubelet ...
	I1217 01:30:33.210135   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:30:33.322873   51086 logs.go:123] Gathering logs for kube-proxy [2ae0779ffea3f0ab8e7bd0cdb83221c0aa0fb5faea9cec5cb3745615ebae1c13] ...
	I1217 01:30:33.322910   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ae0779ffea3f0ab8e7bd0cdb83221c0aa0fb5faea9cec5cb3745615ebae1c13"
	I1217 01:30:33.376240   51086 logs.go:123] Gathering logs for kube-apiserver [a60d85d10467e0ec2ad371d5ea0776e03d016cdc978561fa1498e90cabe0974e] ...
	I1217 01:30:33.376269   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a60d85d10467e0ec2ad371d5ea0776e03d016cdc978561fa1498e90cabe0974e"
	I1217 01:30:33.421703   51086 logs.go:123] Gathering logs for kube-controller-manager [92af4713cd6f5e3fd61ec4f8918aa425ca8e6b874f208f1ed31de3d4b68216c9] ...
	I1217 01:30:33.421743   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92af4713cd6f5e3fd61ec4f8918aa425ca8e6b874f208f1ed31de3d4b68216c9"
	I1217 01:30:33.461816   51086 logs.go:123] Gathering logs for storage-provisioner [04162ab2dd069b12b50be94adf86c654f689f3762162dd1256b82a03c40d9274] ...
	I1217 01:30:33.461843   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04162ab2dd069b12b50be94adf86c654f689f3762162dd1256b82a03c40d9274"
	I1217 01:30:33.510339   51086 logs.go:123] Gathering logs for container status ...
	I1217 01:30:33.510374   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:30:34.019660   55454 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: (1.9086575s)
	I1217 01:30:34.019703   55454 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22168-12839/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-beta.0 from cache
	I1217 01:30:34.019738   55454 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1217 01:30:34.019801   55454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1217 01:30:34.776845   55454 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22168-12839/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1217 01:30:34.776902   55454 cache_images.go:125] Successfully loaded all cached images
	I1217 01:30:34.776909   55454 cache_images.go:94] duration metric: took 15.13877712s to LoadCachedImages
	I1217 01:30:34.776926   55454 kubeadm.go:935] updating node { 192.168.83.246 8443 v1.35.0-beta.0 crio true true} ...
	I1217 01:30:34.777054   55454 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-395127 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.246
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-395127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 01:30:34.777155   55454 ssh_runner.go:195] Run: crio config
	I1217 01:30:34.830207   55454 cni.go:84] Creating CNI manager for ""
	I1217 01:30:34.830235   55454 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 01:30:34.830253   55454 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 01:30:34.830277   55454 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.246 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-395127 NodeName:no-preload-395127 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.246"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.246 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 01:30:34.830392   55454 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.246
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-395127"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.83.246"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.246"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
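	The generated config above is staged on the guest as /var/tmp/minikube/kubeadm.yaml.new (see the scp a few lines below) and promoted to kubeadm.yaml just before `kubeadm init` runs. A quick spot-check of the rendered file on the guest, assuming nothing rewrites it in between:

    sudo grep -E 'advertiseAddress|controlPlaneEndpoint|kubernetesVersion|podSubnet' /var/tmp/minikube/kubeadm.yaml.new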
	
	I1217 01:30:34.830467   55454 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 01:30:34.843265   55454 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1217 01:30:34.843322   55454 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 01:30:34.855911   55454 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256
	I1217 01:30:34.855999   55454 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/22168-12839/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm
	I1217 01:30:34.856013   55454 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1217 01:30:34.856097   55454 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/22168-12839/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet
	I1217 01:30:34.861550   55454 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1217 01:30:34.861574   55454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (58589368 bytes)
	I1217 01:30:35.776063   55454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:30:35.793356   55454 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1217 01:30:35.798423   55454 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1217 01:30:35.798461   55454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (58106148 bytes)
	I1217 01:30:35.894607   55454 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1217 01:30:35.902485   55454 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1217 01:30:35.902533   55454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/cache/linux/amd64/v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (72364216 bytes)
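	The kubectl/kubelet/kubeadm downloads above follow the dl.k8s.io release layout, where each binary has a sidecar .sha256 file holding just the hex digest. A manual equivalent for one binary, with the URL pattern copied from the log (the test itself goes through minikube's download cache instead):

    VER=v1.35.0-beta.0
    curl -fLO "https://dl.k8s.io/release/${VER}/bin/linux/amd64/kubeadm"
    curl -fL -o kubeadm.sha256 "https://dl.k8s.io/release/${VER}/bin/linux/amd64/kubeadm.sha256"
    echo "$(cat kubeadm.sha256)  kubeadm" | sha256sum --check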
	I1217 01:30:36.398407   55454 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 01:30:36.412721   55454 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I1217 01:30:36.441530   55454 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 01:30:36.468394   55454 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1217 01:30:36.492467   55454 ssh_runner.go:195] Run: grep 192.168.83.246	control-plane.minikube.internal$ /etc/hosts
	I1217 01:30:36.497905   55454 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.246	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
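	The bash one-liner above strips any stale control-plane.minikube.internal entry from /etc/hosts and re-appends the current address via a temp file, so repeated starts do not accumulate duplicate entries. The expected result on the guest:

    grep 'control-plane.minikube.internal$' /etc/hosts
    # 192.168.83.246	control-plane.minikube.internal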
	I1217 01:30:36.518712   55454 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:30:36.694158   55454 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 01:30:36.724440   55454 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127 for IP: 192.168.83.246
	I1217 01:30:36.724465   55454 certs.go:195] generating shared ca certs ...
	I1217 01:30:36.724486   55454 certs.go:227] acquiring lock for ca certs: {Name:mk381e1d576792ac916a6048c2225a8ab856de70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:30:36.724683   55454 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-12839/.minikube/ca.key
	I1217 01:30:36.724756   55454 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-12839/.minikube/proxy-client-ca.key
	I1217 01:30:36.724773   55454 certs.go:257] generating profile certs ...
	I1217 01:30:36.724862   55454 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/client.key
	I1217 01:30:36.724881   55454 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/client.crt with IP's: []
	I1217 01:30:36.751766   55454 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/client.crt ...
	I1217 01:30:36.751806   55454 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/client.crt: {Name:mkd427468268a7fd4ff3ed24fee2d61ff6038b6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:30:36.752110   55454 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/client.key ...
	I1217 01:30:36.752138   55454 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/client.key: {Name:mk10e663a87f7a4413195cb7b6ea70cdafef6e84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:30:36.752288   55454 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/apiserver.key.a1d3f72a
	I1217 01:30:36.752318   55454 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/apiserver.crt.a1d3f72a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.83.246]
	I1217 01:30:36.849835   55454 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/apiserver.crt.a1d3f72a ...
	I1217 01:30:36.849861   55454 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/apiserver.crt.a1d3f72a: {Name:mk62f564f24f1af6c95e8e9a784ebc7ccfcac644 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:30:36.850174   55454 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/apiserver.key.a1d3f72a ...
	I1217 01:30:36.850194   55454 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/apiserver.key.a1d3f72a: {Name:mk4a9d85ca062dc4a05f2714b655be2778d32e5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:30:36.850302   55454 certs.go:382] copying /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/apiserver.crt.a1d3f72a -> /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/apiserver.crt
	I1217 01:30:36.850403   55454 certs.go:386] copying /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/apiserver.key.a1d3f72a -> /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/apiserver.key
	I1217 01:30:36.850479   55454 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/proxy-client.key
	I1217 01:30:36.850496   55454 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/proxy-client.crt with IP's: []
	I1217 01:30:36.895726   55454 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/proxy-client.crt ...
	I1217 01:30:36.895753   55454 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/proxy-client.crt: {Name:mkeaac75e9f3a41f3997c4b91dc787a6f4fe703e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:30:36.895936   55454 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/proxy-client.key ...
	I1217 01:30:36.895956   55454 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/proxy-client.key: {Name:mk677012196e2c8d5d593091aa91eb449a55dbf6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:30:36.896226   55454 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/17074.pem (1338 bytes)
	W1217 01:30:36.896278   55454 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-12839/.minikube/certs/17074_empty.pem, impossibly tiny 0 bytes
	I1217 01:30:36.896296   55454 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca-key.pem (1679 bytes)
	I1217 01:30:36.896329   55454 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca.pem (1078 bytes)
	I1217 01:30:36.896361   55454 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/cert.pem (1123 bytes)
	I1217 01:30:36.896407   55454 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/key.pem (1679 bytes)
	I1217 01:30:36.896473   55454 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/files/etc/ssl/certs/170742.pem (1708 bytes)
	I1217 01:30:36.897051   55454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 01:30:36.933336   55454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 01:30:36.969808   55454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 01:30:37.005731   55454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 01:30:37.040631   55454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 01:30:37.074542   55454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 01:30:37.107663   55454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 01:30:37.139407   55454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 01:30:37.174373   55454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/certs/17074.pem --> /usr/share/ca-certificates/17074.pem (1338 bytes)
	I1217 01:30:37.208403   55454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/files/etc/ssl/certs/170742.pem --> /usr/share/ca-certificates/170742.pem (1708 bytes)
	I1217 01:30:37.245797   55454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 01:30:37.281981   55454 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 01:30:37.307979   55454 ssh_runner.go:195] Run: openssl version
	I1217 01:30:37.315190   55454 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/17074.pem
	I1217 01:30:37.327810   55454 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/17074.pem /etc/ssl/certs/17074.pem
	I1217 01:30:37.340479   55454 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17074.pem
	I1217 01:30:37.347676   55454 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:18 /usr/share/ca-certificates/17074.pem
	I1217 01:30:37.347778   55454 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17074.pem
	I1217 01:30:37.356219   55454 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 01:30:37.368216   55454 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/17074.pem /etc/ssl/certs/51391683.0
	I1217 01:30:37.380940   55454 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/170742.pem
	I1217 01:30:37.395106   55454 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/170742.pem /etc/ssl/certs/170742.pem
	I1217 01:30:37.411341   55454 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/170742.pem
	I1217 01:30:37.420147   55454 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:18 /usr/share/ca-certificates/170742.pem
	I1217 01:30:37.420221   55454 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/170742.pem
	I1217 01:30:37.430632   55454 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 01:30:37.446229   55454 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/170742.pem /etc/ssl/certs/3ec20f2e.0
	I1217 01:30:37.461407   55454 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:30:37.475964   55454 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 01:30:37.493415   55454 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:30:37.500230   55454 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:30:37.500309   55454 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:30:37.510547   55454 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 01:30:37.526312   55454 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
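	The openssl/ln pairs above implement the standard OpenSSL hashed-directory layout for /etc/ssl/certs: the symlink name is the certificate's subject hash plus a ".0" suffix, which is where the 51391683.0, 3ec20f2e.0 and b5213941.0 names earlier in this log come from. A minimal sketch for one certificate, using paths from the log:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # e.g. b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"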
	I1217 01:30:37.539865   55454 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 01:30:37.545417   55454 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 01:30:37.545484   55454 kubeadm.go:401] StartCluster: {Name:no-preload-395127 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-395127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.246 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:30:37.545577   55454 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 01:30:37.545631   55454 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 01:30:37.589462   55454 cri.go:89] found id: ""
	I1217 01:30:37.589534   55454 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 01:30:37.605156   55454 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 01:30:37.619956   55454 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 01:30:37.632973   55454 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 01:30:37.632997   55454 kubeadm.go:158] found existing configuration files:
	
	I1217 01:30:37.633063   55454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 01:30:37.645319   55454 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 01:30:37.645395   55454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 01:30:37.658341   55454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 01:30:37.671967   55454 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 01:30:37.672066   55454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 01:30:37.686062   55454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 01:30:37.698805   55454 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 01:30:37.698877   55454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 01:30:37.712943   55454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 01:30:37.725482   55454 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 01:30:37.725539   55454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 01:30:37.740564   55454 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1217 01:30:39.511179   55831 ssh_runner.go:235] Completed: sudo systemctl restart crio: (10.300301609s)
	I1217 01:30:39.511222   55831 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 01:30:39.511275   55831 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 01:30:39.517085   55831 start.go:564] Will wait 60s for crictl version
	I1217 01:30:39.517172   55831 ssh_runner.go:195] Run: which crictl
	I1217 01:30:39.521729   55831 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1217 01:30:39.615915   55831 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1217 01:30:39.616011   55831 ssh_runner.go:195] Run: crio --version
	I1217 01:30:39.673265   55831 ssh_runner.go:195] Run: crio --version
	I1217 01:30:39.728526   55831 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1217 01:30:36.068083   51086 api_server.go:253] Checking apiserver healthz at https://192.168.39.33:8443/healthz ...
	I1217 01:30:36.069202   51086 api_server.go:269] stopped: https://192.168.39.33:8443/healthz: Get "https://192.168.39.33:8443/healthz": dial tcp 192.168.39.33:8443: connect: connection refused
	I1217 01:30:36.069329   51086 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:30:36.069433   51086 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:30:36.128104   51086 cri.go:89] found id: "f3fc173ad1f59bcf9f322121a37044b907e449b8da0ff80d4a58047149d92060"
	I1217 01:30:36.128138   51086 cri.go:89] found id: ""
	I1217 01:30:36.128150   51086 logs.go:282] 1 containers: [f3fc173ad1f59bcf9f322121a37044b907e449b8da0ff80d4a58047149d92060]
	I1217 01:30:36.128223   51086 ssh_runner.go:195] Run: which crictl
	I1217 01:30:36.135012   51086 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:30:36.135127   51086 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:30:36.191894   51086 cri.go:89] found id: "4f7ae7022f37ebd3af667ff2925618ae8cba786b93551c6a75ea31610b440a23"
	I1217 01:30:36.191921   51086 cri.go:89] found id: ""
	I1217 01:30:36.191933   51086 logs.go:282] 1 containers: [4f7ae7022f37ebd3af667ff2925618ae8cba786b93551c6a75ea31610b440a23]
	I1217 01:30:36.191999   51086 ssh_runner.go:195] Run: which crictl
	I1217 01:30:36.197585   51086 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:30:36.197684   51086 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:30:36.256920   51086 cri.go:89] found id: "4e737919b3d271d8ca8cbf13101f204569ffdaf5b0e2afcbd8ab8a5d2febccbb"
	I1217 01:30:36.256951   51086 cri.go:89] found id: "af1f0e1b40070c8723057b438d0b9fcec2ba7192a110956e8eb26a35dad9df7c"
	I1217 01:30:36.256957   51086 cri.go:89] found id: ""
	I1217 01:30:36.256965   51086 logs.go:282] 2 containers: [4e737919b3d271d8ca8cbf13101f204569ffdaf5b0e2afcbd8ab8a5d2febccbb af1f0e1b40070c8723057b438d0b9fcec2ba7192a110956e8eb26a35dad9df7c]
	I1217 01:30:36.257043   51086 ssh_runner.go:195] Run: which crictl
	I1217 01:30:36.262692   51086 ssh_runner.go:195] Run: which crictl
	I1217 01:30:36.267924   51086 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:30:36.268002   51086 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:30:36.315004   51086 cri.go:89] found id: "11c7ac9704180c5215010fc334223b3ee30a5e3fcbf5bc9668670b480422a43b"
	I1217 01:30:36.315047   51086 cri.go:89] found id: "fc4edf606d72313439460b2d586b38df7af1e915c6576704df53a9f7e030a722"
	I1217 01:30:36.315055   51086 cri.go:89] found id: ""
	I1217 01:30:36.315065   51086 logs.go:282] 2 containers: [11c7ac9704180c5215010fc334223b3ee30a5e3fcbf5bc9668670b480422a43b fc4edf606d72313439460b2d586b38df7af1e915c6576704df53a9f7e030a722]
	I1217 01:30:36.315138   51086 ssh_runner.go:195] Run: which crictl
	I1217 01:30:36.321060   51086 ssh_runner.go:195] Run: which crictl
	I1217 01:30:36.327363   51086 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:30:36.327452   51086 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:30:36.389688   51086 cri.go:89] found id: "2ae0779ffea3f0ab8e7bd0cdb83221c0aa0fb5faea9cec5cb3745615ebae1c13"
	I1217 01:30:36.389718   51086 cri.go:89] found id: ""
	I1217 01:30:36.389727   51086 logs.go:282] 1 containers: [2ae0779ffea3f0ab8e7bd0cdb83221c0aa0fb5faea9cec5cb3745615ebae1c13]
	I1217 01:30:36.389793   51086 ssh_runner.go:195] Run: which crictl
	I1217 01:30:36.394620   51086 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:30:36.394710   51086 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:30:36.439277   51086 cri.go:89] found id: "92af4713cd6f5e3fd61ec4f8918aa425ca8e6b874f208f1ed31de3d4b68216c9"
	I1217 01:30:36.439305   51086 cri.go:89] found id: ""
	I1217 01:30:36.439314   51086 logs.go:282] 1 containers: [92af4713cd6f5e3fd61ec4f8918aa425ca8e6b874f208f1ed31de3d4b68216c9]
	I1217 01:30:36.439368   51086 ssh_runner.go:195] Run: which crictl
	I1217 01:30:36.444552   51086 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:30:36.444654   51086 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:30:36.489054   51086 cri.go:89] found id: ""
	I1217 01:30:36.489087   51086 logs.go:282] 0 containers: []
	W1217 01:30:36.489095   51086 logs.go:284] No container was found matching "kindnet"
	I1217 01:30:36.489101   51086 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:30:36.489157   51086 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:30:36.529476   51086 cri.go:89] found id: "04162ab2dd069b12b50be94adf86c654f689f3762162dd1256b82a03c40d9274"
	I1217 01:30:36.529502   51086 cri.go:89] found id: "02ce2c49a1d0f774c8135d8233229979162a7bb38d6f23fcdc2cd03bd2231c24"
	I1217 01:30:36.529508   51086 cri.go:89] found id: ""
	I1217 01:30:36.529517   51086 logs.go:282] 2 containers: [04162ab2dd069b12b50be94adf86c654f689f3762162dd1256b82a03c40d9274 02ce2c49a1d0f774c8135d8233229979162a7bb38d6f23fcdc2cd03bd2231c24]
	I1217 01:30:36.529582   51086 ssh_runner.go:195] Run: which crictl
	I1217 01:30:36.534695   51086 ssh_runner.go:195] Run: which crictl
	I1217 01:30:36.539112   51086 logs.go:123] Gathering logs for etcd [4f7ae7022f37ebd3af667ff2925618ae8cba786b93551c6a75ea31610b440a23] ...
	I1217 01:30:36.539146   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f7ae7022f37ebd3af667ff2925618ae8cba786b93551c6a75ea31610b440a23"
	I1217 01:30:36.585461   51086 logs.go:123] Gathering logs for kube-scheduler [11c7ac9704180c5215010fc334223b3ee30a5e3fcbf5bc9668670b480422a43b] ...
	I1217 01:30:36.585494   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11c7ac9704180c5215010fc334223b3ee30a5e3fcbf5bc9668670b480422a43b"
	I1217 01:30:36.682813   51086 logs.go:123] Gathering logs for kube-scheduler [fc4edf606d72313439460b2d586b38df7af1e915c6576704df53a9f7e030a722] ...
	I1217 01:30:36.682856   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc4edf606d72313439460b2d586b38df7af1e915c6576704df53a9f7e030a722"
	I1217 01:30:36.729668   51086 logs.go:123] Gathering logs for CRI-O ...
	I1217 01:30:36.729698   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 01:30:37.163566   51086 logs.go:123] Gathering logs for coredns [af1f0e1b40070c8723057b438d0b9fcec2ba7192a110956e8eb26a35dad9df7c] ...
	I1217 01:30:37.163612   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af1f0e1b40070c8723057b438d0b9fcec2ba7192a110956e8eb26a35dad9df7c"
	I1217 01:30:37.201517   51086 logs.go:123] Gathering logs for kubelet ...
	I1217 01:30:37.201568   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:30:37.307741   51086 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:30:37.307780   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:30:37.383323   51086 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:30:37.383358   51086 logs.go:123] Gathering logs for kube-apiserver [f3fc173ad1f59bcf9f322121a37044b907e449b8da0ff80d4a58047149d92060] ...
	I1217 01:30:37.383375   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f3fc173ad1f59bcf9f322121a37044b907e449b8da0ff80d4a58047149d92060"
	I1217 01:30:37.427800   51086 logs.go:123] Gathering logs for coredns [4e737919b3d271d8ca8cbf13101f204569ffdaf5b0e2afcbd8ab8a5d2febccbb] ...
	I1217 01:30:37.427830   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e737919b3d271d8ca8cbf13101f204569ffdaf5b0e2afcbd8ab8a5d2febccbb"
	I1217 01:30:37.492093   51086 logs.go:123] Gathering logs for kube-proxy [2ae0779ffea3f0ab8e7bd0cdb83221c0aa0fb5faea9cec5cb3745615ebae1c13] ...
	I1217 01:30:37.492156   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ae0779ffea3f0ab8e7bd0cdb83221c0aa0fb5faea9cec5cb3745615ebae1c13"
	I1217 01:30:37.537089   51086 logs.go:123] Gathering logs for kube-controller-manager [92af4713cd6f5e3fd61ec4f8918aa425ca8e6b874f208f1ed31de3d4b68216c9] ...
	I1217 01:30:37.537127   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92af4713cd6f5e3fd61ec4f8918aa425ca8e6b874f208f1ed31de3d4b68216c9"
	I1217 01:30:37.584499   51086 logs.go:123] Gathering logs for storage-provisioner [02ce2c49a1d0f774c8135d8233229979162a7bb38d6f23fcdc2cd03bd2231c24] ...
	I1217 01:30:37.584533   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02ce2c49a1d0f774c8135d8233229979162a7bb38d6f23fcdc2cd03bd2231c24"
	I1217 01:30:37.632249   51086 logs.go:123] Gathering logs for container status ...
	I1217 01:30:37.632291   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:30:37.682220   51086 logs.go:123] Gathering logs for dmesg ...
	I1217 01:30:37.682253   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:30:37.700732   51086 logs.go:123] Gathering logs for storage-provisioner [04162ab2dd069b12b50be94adf86c654f689f3762162dd1256b82a03c40d9274] ...
	I1217 01:30:37.700768   51086 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04162ab2dd069b12b50be94adf86c654f689f3762162dd1256b82a03c40d9274"
	I1217 01:30:37.979268   55454 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 01:30:39.733629   55831 main.go:143] libmachine: domain pause-716229 has defined MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:39.734200   55831 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:cd:79", ip: ""} in network mk-pause-716229: {Iface:virbr3 ExpiryTime:2025-12-17 02:29:14 +0000 UTC Type:0 Mac:52:54:00:41:cd:79 Iaid: IPaddr:192.168.61.9 Prefix:24 Hostname:pause-716229 Clientid:01:52:54:00:41:cd:79}
	I1217 01:30:39.734246   55831 main.go:143] libmachine: domain pause-716229 has defined IP address 192.168.61.9 and MAC address 52:54:00:41:cd:79 in network mk-pause-716229
	I1217 01:30:39.734572   55831 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1217 01:30:39.743102   55831 kubeadm.go:884] updating cluster {Name:pause-716229 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-716229 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.9 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 01:30:39.743327   55831 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1217 01:30:39.743396   55831 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 01:30:39.901711   55831 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 01:30:39.901745   55831 crio.go:433] Images already preloaded, skipping extraction
	I1217 01:30:39.901815   55831 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 01:30:39.983579   55831 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 01:30:39.983613   55831 cache_images.go:86] Images are preloaded, skipping loading
	I1217 01:30:39.983624   55831 kubeadm.go:935] updating node { 192.168.61.9 8443 v1.34.2 crio true true} ...
	I1217 01:30:39.983759   55831 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-716229 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.9
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-716229 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 01:30:39.983876   55831 ssh_runner.go:195] Run: crio config
	I1217 01:30:40.089652   55831 cni.go:84] Creating CNI manager for ""
	I1217 01:30:40.089693   55831 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 01:30:40.089712   55831 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 01:30:40.089741   55831 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.9 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-716229 NodeName:pause-716229 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.9"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.9 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 01:30:40.089943   55831 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.9
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-716229"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.9"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.9"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 01:30:40.090113   55831 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1217 01:30:40.118423   55831 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 01:30:40.118516   55831 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 01:30:40.140559   55831 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I1217 01:30:40.182106   55831 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 01:30:40.207405   55831 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1217 01:30:40.262429   55831 ssh_runner.go:195] Run: grep 192.168.61.9	control-plane.minikube.internal$ /etc/hosts
	I1217 01:30:40.281797   55831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:30:40.620533   55831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 01:30:40.649489   55831 certs.go:69] Setting up /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/pause-716229 for IP: 192.168.61.9
	I1217 01:30:40.649513   55831 certs.go:195] generating shared ca certs ...
	I1217 01:30:40.649530   55831 certs.go:227] acquiring lock for ca certs: {Name:mk381e1d576792ac916a6048c2225a8ab856de70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:30:40.649705   55831 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22168-12839/.minikube/ca.key
	I1217 01:30:40.649778   55831 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22168-12839/.minikube/proxy-client-ca.key
	I1217 01:30:40.649806   55831 certs.go:257] generating profile certs ...
	I1217 01:30:40.649956   55831 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/pause-716229/client.key
	I1217 01:30:40.650102   55831 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/pause-716229/apiserver.key.9d9987e4
	I1217 01:30:40.650170   55831 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/pause-716229/proxy-client.key
	I1217 01:30:40.650357   55831 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/17074.pem (1338 bytes)
	W1217 01:30:40.650396   55831 certs.go:480] ignoring /home/jenkins/minikube-integration/22168-12839/.minikube/certs/17074_empty.pem, impossibly tiny 0 bytes
	I1217 01:30:40.650405   55831 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca-key.pem (1679 bytes)
	I1217 01:30:40.650431   55831 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/ca.pem (1078 bytes)
	I1217 01:30:40.650453   55831 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/cert.pem (1123 bytes)
	I1217 01:30:40.650483   55831 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/certs/key.pem (1679 bytes)
	I1217 01:30:40.650529   55831 certs.go:484] found cert: /home/jenkins/minikube-integration/22168-12839/.minikube/files/etc/ssl/certs/170742.pem (1708 bytes)
	I1217 01:30:40.651172   55831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 01:30:40.707541   55831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 01:30:40.769292   55831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 01:30:40.816066   55831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 01:30:40.860727   55831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/pause-716229/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1217 01:30:40.900973   55831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/pause-716229/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 01:30:40.934536   55831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/pause-716229/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 01:30:40.970705   55831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/pause-716229/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 01:30:41.004205   55831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/files/etc/ssl/certs/170742.pem --> /usr/share/ca-certificates/170742.pem (1708 bytes)
	I1217 01:30:41.046143   55831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 01:30:41.083364   55831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22168-12839/.minikube/certs/17074.pem --> /usr/share/ca-certificates/17074.pem (1338 bytes)
	I1217 01:30:41.119367   55831 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 01:30:41.148621   55831 ssh_runner.go:195] Run: openssl version
	I1217 01:30:41.156675   55831 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/17074.pem
	I1217 01:30:41.172461   55831 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/17074.pem /etc/ssl/certs/17074.pem
	I1217 01:30:41.188236   55831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17074.pem
	I1217 01:30:41.194693   55831 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:18 /usr/share/ca-certificates/17074.pem
	I1217 01:30:41.194767   55831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17074.pem
	I1217 01:30:41.203719   55831 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 01:30:41.220299   55831 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/170742.pem
	I1217 01:30:41.236087   55831 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/170742.pem /etc/ssl/certs/170742.pem
	I1217 01:30:41.251908   55831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/170742.pem
	I1217 01:30:41.258448   55831 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:18 /usr/share/ca-certificates/170742.pem
	I1217 01:30:41.258512   55831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/170742.pem
	I1217 01:30:41.269418   55831 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 01:30:41.287477   55831 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:30:41.300413   55831 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 01:30:41.313271   55831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:30:41.319461   55831 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:06 /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:30:41.319530   55831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:30:41.327881   55831 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 01:30:41.344267   55831 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 01:30:41.350375   55831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 01:30:41.359771   55831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 01:30:41.368879   55831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 01:30:41.377291   55831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 01:30:41.387445   55831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 01:30:41.396912   55831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1217 01:30:41.405959   55831 kubeadm.go:401] StartCluster: {Name:pause-716229 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-716229 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.9 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:30:41.406139   55831 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 01:30:41.406227   55831 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 01:30:41.455737   55831 cri.go:89] found id: "2b923cc6453c863412d08cc49a0d17451f0fe4f0ef72a6c2dae9970574e5668f"
	I1217 01:30:41.455760   55831 cri.go:89] found id: "c6661edccb3b25fc75fc44ed63529a477c27e51decbe411700030f58380f028d"
	I1217 01:30:41.455766   55831 cri.go:89] found id: "4ab70530751bb7195a9e9385ea81c60aca8226c38f366f74a8ade07361033002"
	I1217 01:30:41.455771   55831 cri.go:89] found id: "b7b6956036af3c69a90a6e5dd61d14124fa30850b8ec8db991c70d667888a542"
	I1217 01:30:41.455776   55831 cri.go:89] found id: "8016a5f4fa0b7c8ceda82ce8e8e6d276852bea59b597f635afd89296a9090632"
	I1217 01:30:41.455781   55831 cri.go:89] found id: "a7fbae2a502d7025e987f1bd5ae191db5709dff042be3cf7250d266712f0d834"
	I1217 01:30:41.455785   55831 cri.go:89] found id: ""
	I1217 01:30:41.455835   55831 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
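Near the end of the log above, the tooling runs `openssl x509 -noout -in <cert> -checkend 86400` against each control-plane certificate before reusing it. A minimal Go sketch of the equivalent check; the certificate path is one of those named in the log, and the rest is illustrative, not minikube's own certificate code:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// Checks the same condition as `openssl x509 -noout -checkend 86400`:
// whether the certificate will still be valid 24 hours from now.
func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse:", err)
		return
	}
	deadline := time.Now().Add(86400 * time.Second)
	if cert.NotAfter.After(deadline) {
		fmt.Println("certificate is valid for at least another 24h")
	} else {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
	}
}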
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-716229 -n pause-716229
helpers_test.go:270: (dbg) Run:  kubectl --context pause-716229 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (44.34s)
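For reference, the post-mortem step at helpers_test.go:270 above asks kubectl for any pods that are not in the Running phase. A minimal Go sketch of that kind of query; the context name is taken from this run, and the code is illustrative, not the helpers_test.go implementation:

package main

import (
	"fmt"
	"os/exec"
)

// Lists pods across all namespaces whose phase is not Running,
// using the same field selector as the post-mortem check above.
func main() {
	out, err := exec.Command("kubectl", "--context", "pause-716229",
		"get", "po", "-A",
		"--field-selector=status.phase!=Running",
		"-o=jsonpath={.items[*].metadata.name}").CombinedOutput()
	if err != nil {
		fmt.Printf("kubectl failed: %v\n%s\n", err, out)
		return
	}
	if len(out) == 0 {
		fmt.Println("all pods are Running")
		return
	}
	fmt.Printf("non-Running pods: %s\n", out)
}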

                                                
                                    

Test pass (365/431)

Order  Passed test  Duration (s)

3 TestDownloadOnly/v1.28.0/json-events 7.46
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.18
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.2/json-events 2.56
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.08
18 TestDownloadOnly/v1.34.2/DeleteAll 0.18
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.15
21 TestDownloadOnly/v1.35.0-beta.0/json-events 3.37
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.17
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.16
30 TestBinaryMirror 0.67
31 TestOffline 78.81
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 134.43
40 TestAddons/serial/GCPAuth/Namespaces 0.34
41 TestAddons/serial/GCPAuth/FakeCredentials 10.72
44 TestAddons/parallel/Registry 17.22
45 TestAddons/parallel/RegistryCreds 0.77
47 TestAddons/parallel/InspektorGadget 10.89
48 TestAddons/parallel/MetricsServer 6.35
50 TestAddons/parallel/CSI 52.89
51 TestAddons/parallel/Headlamp 18.39
52 TestAddons/parallel/CloudSpanner 6.62
53 TestAddons/parallel/LocalPath 8.21
54 TestAddons/parallel/NvidiaDevicePlugin 6.68
55 TestAddons/parallel/Yakd 11.85
57 TestAddons/StoppedEnableDisable 90.25
58 TestCertOptions 63.09
59 TestCertExpiration 307.04
61 TestForceSystemdFlag 65.93
62 TestForceSystemdEnv 61.09
67 TestErrorSpam/setup 40.75
68 TestErrorSpam/start 0.36
69 TestErrorSpam/status 0.72
70 TestErrorSpam/pause 1.58
71 TestErrorSpam/unpause 1.9
72 TestErrorSpam/stop 5.13
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 77.95
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 53.87
79 TestFunctional/serial/KubeContext 0.05
80 TestFunctional/serial/KubectlGetPods 0.08
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.07
84 TestFunctional/serial/CacheCmd/cache/add_local 1.12
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.18
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.54
89 TestFunctional/serial/CacheCmd/cache/delete 0.12
90 TestFunctional/serial/MinikubeKubectlCmd 0.12
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
92 TestFunctional/serial/ExtraConfig 36.03
93 TestFunctional/serial/ComponentHealth 0.07
94 TestFunctional/serial/LogsCmd 1.36
95 TestFunctional/serial/LogsFileCmd 1.34
96 TestFunctional/serial/InvalidService 4.1
98 TestFunctional/parallel/ConfigCmd 0.4
99 TestFunctional/parallel/DashboardCmd 11.86
100 TestFunctional/parallel/DryRun 0.25
101 TestFunctional/parallel/InternationalLanguage 0.13
102 TestFunctional/parallel/StatusCmd 0.81
106 TestFunctional/parallel/ServiceCmdConnect 9.51
107 TestFunctional/parallel/AddonsCmd 0.17
108 TestFunctional/parallel/PersistentVolumeClaim 31.67
110 TestFunctional/parallel/SSHCmd 0.34
111 TestFunctional/parallel/CpCmd 1.25
112 TestFunctional/parallel/MySQL 34.49
113 TestFunctional/parallel/FileSync 0.21
114 TestFunctional/parallel/CertSync 1.27
118 TestFunctional/parallel/NodeLabels 0.08
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.39
122 TestFunctional/parallel/License 0.3
132 TestFunctional/parallel/ServiceCmd/DeployApp 8.2
133 TestFunctional/parallel/ProfileCmd/profile_not_create 0.32
134 TestFunctional/parallel/ProfileCmd/profile_list 0.35
135 TestFunctional/parallel/ProfileCmd/profile_json_output 0.35
136 TestFunctional/parallel/MountCmd/any-port 8.05
137 TestFunctional/parallel/ServiceCmd/List 0.26
138 TestFunctional/parallel/ServiceCmd/JSONOutput 0.27
139 TestFunctional/parallel/ServiceCmd/HTTPS 0.27
140 TestFunctional/parallel/ServiceCmd/Format 0.31
141 TestFunctional/parallel/ServiceCmd/URL 0.3
142 TestFunctional/parallel/MountCmd/specific-port 1.51
143 TestFunctional/parallel/Version/short 0.06
144 TestFunctional/parallel/Version/components 0.44
145 TestFunctional/parallel/ImageCommands/ImageListShort 0.18
146 TestFunctional/parallel/ImageCommands/ImageListTable 0.2
147 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
148 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
149 TestFunctional/parallel/ImageCommands/ImageBuild 3.95
150 TestFunctional/parallel/ImageCommands/Setup 0.48
151 TestFunctional/parallel/MountCmd/VerifyCleanup 1.51
152 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.51
153 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.05
154 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.21
155 TestFunctional/parallel/UpdateContextCmd/no_changes 0.07
156 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.07
157 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.07
158 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.67
159 TestFunctional/parallel/ImageCommands/ImageRemove 0.58
160 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.62
161 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.55
162 TestFunctional/delete_echo-server_images 0.04
163 TestFunctional/delete_my-image_image 0.02
164 TestFunctional/delete_minikube_cached_images 0.02
168 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
169 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 75.71
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 197.8
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.05
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 0.08
176 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 3.08
177 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 1.04
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.06
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.06
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.18
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.52
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.13
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 0.12
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 0.12
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.27
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.26
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 3.52
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.44
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.22
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.11
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 0.78
200 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.14
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.33
204 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 1.18
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.19
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.16
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 0.07
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.39
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.28
216 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.08
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.07
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.07
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.06
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.45
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.18
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.2
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.2
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.18
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 3.22
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.19
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.42
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.35
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 0.88
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.32
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.31
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 0.97
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.5
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.46
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.76
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.54
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.09
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.13
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 1.22
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 1.22
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
261 TestMultiControlPlane/serial/StartCluster 187.33
262 TestMultiControlPlane/serial/DeployApp 6.25
263 TestMultiControlPlane/serial/PingHostFromPods 1.37
264 TestMultiControlPlane/serial/AddWorkerNode 45.66
265 TestMultiControlPlane/serial/NodeLabels 0.07
266 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.71
267 TestMultiControlPlane/serial/CopyFile 11.16
268 TestMultiControlPlane/serial/StopSecondaryNode 80.75
269 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.54
270 TestMultiControlPlane/serial/RestartSecondaryNode 38.23
271 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.77
272 TestMultiControlPlane/serial/RestartClusterKeepsNodes 386.02
273 TestMultiControlPlane/serial/DeleteSecondaryNode 19.21
274 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.53
275 TestMultiControlPlane/serial/StopCluster 237.2
276 TestMultiControlPlane/serial/RestartCluster 91.31
277 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.53
278 TestMultiControlPlane/serial/AddSecondaryNode 72.78
279 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.69
284 TestJSONOutput/start/Command 83.04
285 TestJSONOutput/start/Audit 0
287 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
288 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
290 TestJSONOutput/pause/Command 0.72
291 TestJSONOutput/pause/Audit 0
293 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
294 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
296 TestJSONOutput/unpause/Command 0.66
297 TestJSONOutput/unpause/Audit 0
299 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
300 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
302 TestJSONOutput/stop/Command 8.22
303 TestJSONOutput/stop/Audit 0
305 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
306 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
307 TestErrorJSONOutput 0.25
312 TestMainNoArgs 0.06
313 TestMinikubeProfile 76.16
316 TestMountStart/serial/StartWithMountFirst 20.93
317 TestMountStart/serial/VerifyMountFirst 0.31
318 TestMountStart/serial/StartWithMountSecond 19.42
319 TestMountStart/serial/VerifyMountSecond 0.31
320 TestMountStart/serial/DeleteFirst 0.67
321 TestMountStart/serial/VerifyMountPostDelete 0.31
322 TestMountStart/serial/Stop 1.35
323 TestMountStart/serial/RestartStopped 17.94
324 TestMountStart/serial/VerifyMountPostStop 0.31
327 TestMultiNode/serial/FreshStart2Nodes 95.55
328 TestMultiNode/serial/DeployApp2Nodes 5.31
329 TestMultiNode/serial/PingHostFrom2Pods 0.91
330 TestMultiNode/serial/AddNode 42.35
331 TestMultiNode/serial/MultiNodeLabels 0.07
332 TestMultiNode/serial/ProfileList 0.46
333 TestMultiNode/serial/CopyFile 6.02
334 TestMultiNode/serial/StopNode 2.29
335 TestMultiNode/serial/StartAfterStop 38.71
336 TestMultiNode/serial/RestartKeepsNodes 301.14
337 TestMultiNode/serial/DeleteNode 2.6
338 TestMultiNode/serial/StopMultiNode 163.89
339 TestMultiNode/serial/RestartMultiNode 81.72
340 TestMultiNode/serial/ValidateNameConflict 40.23
347 TestScheduledStopUnix 110.37
351 TestRunningBinaryUpgrade 365.49
353 TestKubernetesUpgrade 158.97
356 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
357 TestNoKubernetes/serial/StartWithK8s 100.58
358 TestNoKubernetes/serial/StartWithStopK8s 33.59
359 TestNoKubernetes/serial/Start 42.61
360 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
361 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
362 TestNoKubernetes/serial/ProfileList 1.2
363 TestNoKubernetes/serial/Stop 1.46
364 TestNoKubernetes/serial/StartNoArgs 37.16
365 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.17
366 TestStoppedBinaryUpgrade/Setup 0.65
367 TestStoppedBinaryUpgrade/Upgrade 90.74
379 TestPause/serial/Start 81.2
384 TestNetworkPlugins/group/false 5.13
385 TestStoppedBinaryUpgrade/MinikubeLogs 1.49
386 TestISOImage/Setup 34.75
391 TestStartStop/group/old-k8s-version/serial/FirstStart 86.22
393 TestISOImage/Binaries/crictl 0.18
394 TestISOImage/Binaries/curl 0.18
395 TestISOImage/Binaries/docker 0.19
396 TestISOImage/Binaries/git 0.21
397 TestISOImage/Binaries/iptables 0.2
398 TestISOImage/Binaries/podman 0.19
399 TestISOImage/Binaries/rsync 0.2
400 TestISOImage/Binaries/socat 0.18
401 TestISOImage/Binaries/wget 0.18
402 TestISOImage/Binaries/VBoxControl 0.18
403 TestISOImage/Binaries/VBoxService 0.2
405 TestStartStop/group/no-preload/serial/FirstStart 114.38
407 TestStartStop/group/old-k8s-version/serial/DeployApp 10.38
408 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.43
409 TestStartStop/group/old-k8s-version/serial/Stop 84.02
411 TestStartStop/group/embed-certs/serial/FirstStart 80.08
412 TestStartStop/group/no-preload/serial/DeployApp 8.38
413 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.98
414 TestStartStop/group/no-preload/serial/Stop 84.54
416 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 54.46
417 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.16
418 TestStartStop/group/old-k8s-version/serial/SecondStart 57.01
419 TestStartStop/group/embed-certs/serial/DeployApp 10.36
420 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.18
421 TestStartStop/group/embed-certs/serial/Stop 71.18
422 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.34
423 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.15
424 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.17
425 TestStartStop/group/no-preload/serial/SecondStart 59.79
426 TestStartStop/group/default-k8s-diff-port/serial/Stop 89.01
427 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 14.01
428 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.08
429 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.2
430 TestStartStop/group/old-k8s-version/serial/Pause 2.74
432 TestStartStop/group/newest-cni/serial/FirstStart 40.97
433 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.16
434 TestStartStop/group/embed-certs/serial/SecondStart 45.26
435 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
436 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
437 TestStartStop/group/newest-cni/serial/DeployApp 0
438 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.47
439 TestStartStop/group/newest-cni/serial/Stop 7.35
440 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.22
441 TestStartStop/group/no-preload/serial/Pause 2.75
442 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
443 TestStartStop/group/newest-cni/serial/SecondStart 34.1
444 TestNetworkPlugins/group/auto/Start 95.73
445 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 12.01
446 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
447 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 70.41
448 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.09
449 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.21
450 TestStartStop/group/embed-certs/serial/Pause 2.8
451 TestNetworkPlugins/group/kindnet/Start 83.09
452 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
453 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
454 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
455 TestStartStop/group/newest-cni/serial/Pause 3.58
456 TestNetworkPlugins/group/calico/Start 118.21
457 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 7.01
458 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
459 TestNetworkPlugins/group/auto/KubeletFlags 0.21
460 TestNetworkPlugins/group/auto/NetCatPod 13.33
461 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
462 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.35
463 TestNetworkPlugins/group/custom-flannel/Start 74.69
464 TestNetworkPlugins/group/auto/DNS 0.17
465 TestNetworkPlugins/group/auto/Localhost 0.17
466 TestNetworkPlugins/group/auto/HairPin 0.17
467 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
468 TestNetworkPlugins/group/kindnet/KubeletFlags 0.2
469 TestNetworkPlugins/group/kindnet/NetCatPod 12.33
470 TestNetworkPlugins/group/enable-default-cni/Start 85.79
471 TestNetworkPlugins/group/kindnet/DNS 0.23
472 TestNetworkPlugins/group/kindnet/Localhost 0.16
473 TestNetworkPlugins/group/kindnet/HairPin 0.18
474 TestNetworkPlugins/group/flannel/Start 73.17
475 TestNetworkPlugins/group/calico/ControllerPod 6.01
476 TestNetworkPlugins/group/calico/KubeletFlags 0.22
477 TestNetworkPlugins/group/calico/NetCatPod 11.29
478 TestNetworkPlugins/group/calico/DNS 0.19
479 TestNetworkPlugins/group/calico/Localhost 0.18
480 TestNetworkPlugins/group/calico/HairPin 0.16
481 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.2
482 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.35
483 TestNetworkPlugins/group/custom-flannel/DNS 0.19
484 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
485 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
486 TestNetworkPlugins/group/bridge/Start 83.94
488 TestISOImage/PersistentMounts//data 0.17
489 TestISOImage/PersistentMounts//var/lib/docker 0.17
490 TestISOImage/PersistentMounts//var/lib/cni 0.17
491 TestISOImage/PersistentMounts//var/lib/kubelet 0.17
492 TestISOImage/PersistentMounts//var/lib/minikube 0.18
493 TestISOImage/PersistentMounts//var/lib/toolbox 0.18
494 TestISOImage/PersistentMounts//var/lib/boot2docker 0.18
495 TestISOImage/VersionJSON 0.16
496 TestISOImage/eBPFSupport 0.17
497 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.17
498 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.64
499 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
500 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
501 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
502 TestNetworkPlugins/group/flannel/ControllerPod 6.01
503 TestNetworkPlugins/group/flannel/KubeletFlags 0.2
504 TestNetworkPlugins/group/flannel/NetCatPod 11.27
505 TestNetworkPlugins/group/flannel/DNS 0.13
506 TestNetworkPlugins/group/flannel/Localhost 0.12
507 TestNetworkPlugins/group/flannel/HairPin 0.13
508 TestNetworkPlugins/group/bridge/KubeletFlags 0.17
509 TestNetworkPlugins/group/bridge/NetCatPod 10.23
510 TestNetworkPlugins/group/bridge/DNS 0.15
511 TestNetworkPlugins/group/bridge/Localhost 0.12
512 TestNetworkPlugins/group/bridge/HairPin 0.13
TestDownloadOnly/v1.28.0/json-events (7.46s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-498547 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-498547 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (7.455007387s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (7.46s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1217 00:06:20.795965   17074 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1217 00:06:20.796069   17074 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-12839/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
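
This check passes purely against the local cache: the json-events run above already populated it, and preload-exists only verifies the tarball is present at the logged path. A quick manual equivalent, reusing the path from this run (a sketch for illustration, not part of the suite):

# check the cached preload tarball by hand (path taken from the log above)
PRELOAD=/home/jenkins/minikube-integration/22168-12839/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
[ -f "$PRELOAD" ] && echo "preload present: $(du -h "$PRELOAD" | cut -f1)"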

TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-498547
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-498547: exit status 85 (77.040654ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-498547 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-498547 │ jenkins │ v1.37.0 │ 17 Dec 25 00:06 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:06:13
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:06:13.396498   17085 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:06:13.396768   17085 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:06:13.396779   17085 out.go:374] Setting ErrFile to fd 2...
	I1217 00:06:13.396784   17085 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:06:13.396991   17085 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12839/.minikube/bin
	W1217 00:06:13.397127   17085 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22168-12839/.minikube/config/config.json: open /home/jenkins/minikube-integration/22168-12839/.minikube/config/config.json: no such file or directory
	I1217 00:06:13.397603   17085 out.go:368] Setting JSON to true
	I1217 00:06:13.398595   17085 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2919,"bootTime":1765927054,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 00:06:13.398658   17085 start.go:143] virtualization: kvm guest
	I1217 00:06:13.404471   17085 out.go:99] [download-only-498547] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1217 00:06:13.404668   17085 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22168-12839/.minikube/cache/preloaded-tarball: no such file or directory
	I1217 00:06:13.404686   17085 notify.go:221] Checking for updates...
	I1217 00:06:13.405887   17085 out.go:171] MINIKUBE_LOCATION=22168
	I1217 00:06:13.407110   17085 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:06:13.408602   17085 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22168-12839/kubeconfig
	I1217 00:06:13.410257   17085 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12839/.minikube
	I1217 00:06:13.411702   17085 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1217 00:06:13.414266   17085 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1217 00:06:13.414539   17085 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:06:13.968109   17085 out.go:99] Using the kvm2 driver based on user configuration
	I1217 00:06:13.968171   17085 start.go:309] selected driver: kvm2
	I1217 00:06:13.968178   17085 start.go:927] validating driver "kvm2" against <nil>
	I1217 00:06:13.968573   17085 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 00:06:13.969099   17085 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1217 00:06:13.969266   17085 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1217 00:06:13.969288   17085 cni.go:84] Creating CNI manager for ""
	I1217 00:06:13.969335   17085 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1217 00:06:13.969343   17085 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1217 00:06:13.969381   17085 start.go:353] cluster config:
	{Name:download-only-498547 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-498547 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:06:13.969562   17085 iso.go:125] acquiring lock: {Name:mk94a221d1243bc618ab687e91468d7a3f9fe960 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:06:13.971173   17085 out.go:99] Downloading VM boot image ...
	I1217 00:06:13.971218   17085 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso.sha256 -> /home/jenkins/minikube-integration/22168-12839/.minikube/cache/iso/amd64/minikube-v1.37.0-1765846775-22141-amd64.iso
	I1217 00:06:17.319434   17085 out.go:99] Starting "download-only-498547" primary control-plane node in "download-only-498547" cluster
	I1217 00:06:17.319479   17085 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1217 00:06:17.335614   17085 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1217 00:06:17.335639   17085 cache.go:65] Caching tarball of preloaded images
	I1217 00:06:17.335820   17085 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1217 00:06:17.337820   17085 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1217 00:06:17.337847   17085 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1217 00:06:17.358631   17085 preload.go:295] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1217 00:06:17.358780   17085 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/22168-12839/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-498547 host does not exist
	  To start a cluster, run: "minikube start -p download-only-498547"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
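
Note that exit status 85 is the expected outcome here, not a failure: a download-only profile never creates a VM, so "minikube logs" has nothing to collect (hence the "host does not exist" hint in the stdout above), and the assertion at aaa_download_only_test.go:183-184 treats the non-zero exit as acceptable as long as the command returns promptly. A sketch of the same check by hand, using the profile name from this run:

out/minikube-linux-amd64 logs -p download-only-498547
echo $?   # expect 85: the profile exists but its VM was never created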

TestDownloadOnly/v1.28.0/DeleteAll (0.18s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.18s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-498547
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.34.2/json-events (2.56s)

=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-823636 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-823636 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (2.560136659s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (2.56s)

TestDownloadOnly/v1.34.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1217 00:06:23.766460   17074 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1217 00:06:23.766501   17074 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-12839/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

TestDownloadOnly/v1.34.2/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-823636
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-823636: exit status 85 (80.354042ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-498547 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-498547 │ jenkins │ v1.37.0 │ 17 Dec 25 00:06 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.37.0 │ 17 Dec 25 00:06 UTC │ 17 Dec 25 00:06 UTC │
	│ delete  │ -p download-only-498547                                                                                                                                                 │ download-only-498547 │ jenkins │ v1.37.0 │ 17 Dec 25 00:06 UTC │ 17 Dec 25 00:06 UTC │
	│ start   │ -o=json --download-only -p download-only-823636 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-823636 │ jenkins │ v1.37.0 │ 17 Dec 25 00:06 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:06:21
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:06:21.259335   17297 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:06:21.259613   17297 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:06:21.259624   17297 out.go:374] Setting ErrFile to fd 2...
	I1217 00:06:21.259628   17297 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:06:21.259876   17297 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12839/.minikube/bin
	I1217 00:06:21.260390   17297 out.go:368] Setting JSON to true
	I1217 00:06:21.261242   17297 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2927,"bootTime":1765927054,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 00:06:21.261294   17297 start.go:143] virtualization: kvm guest
	I1217 00:06:21.263786   17297 out.go:99] [download-only-823636] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 00:06:21.263972   17297 notify.go:221] Checking for updates...
	I1217 00:06:21.265512   17297 out.go:171] MINIKUBE_LOCATION=22168
	I1217 00:06:21.267231   17297 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:06:21.268751   17297 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22168-12839/kubeconfig
	I1217 00:06:21.270187   17297 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12839/.minikube
	I1217 00:06:21.271422   17297 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-823636 host does not exist
	  To start a cluster, run: "minikube start -p download-only-823636"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.08s)

TestDownloadOnly/v1.34.2/DeleteAll (0.18s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.18s)

TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-823636
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.35.0-beta.0/json-events (3.37s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-330283 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-330283 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (3.372343087s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (3.37s)

TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1217 00:06:27.548154   17074 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
I1217 00:06:27.548186   17074 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22168-12839/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-330283
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-330283: exit status 85 (74.874393ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                      ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-498547 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio        │ download-only-498547 │ jenkins │ v1.37.0 │ 17 Dec 25 00:06 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 17 Dec 25 00:06 UTC │ 17 Dec 25 00:06 UTC │
	│ delete  │ -p download-only-498547                                                                                                                                                        │ download-only-498547 │ jenkins │ v1.37.0 │ 17 Dec 25 00:06 UTC │ 17 Dec 25 00:06 UTC │
	│ start   │ -o=json --download-only -p download-only-823636 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio        │ download-only-823636 │ jenkins │ v1.37.0 │ 17 Dec 25 00:06 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 17 Dec 25 00:06 UTC │ 17 Dec 25 00:06 UTC │
	│ delete  │ -p download-only-823636                                                                                                                                                        │ download-only-823636 │ jenkins │ v1.37.0 │ 17 Dec 25 00:06 UTC │ 17 Dec 25 00:06 UTC │
	│ start   │ -o=json --download-only -p download-only-330283 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-330283 │ jenkins │ v1.37.0 │ 17 Dec 25 00:06 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:06:24
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:06:24.235710   17458 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:06:24.235987   17458 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:06:24.236000   17458 out.go:374] Setting ErrFile to fd 2...
	I1217 00:06:24.236005   17458 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:06:24.236318   17458 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12839/.minikube/bin
	I1217 00:06:24.236966   17458 out.go:368] Setting JSON to true
	I1217 00:06:24.238163   17458 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2930,"bootTime":1765927054,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 00:06:24.238240   17458 start.go:143] virtualization: kvm guest
	I1217 00:06:24.240412   17458 out.go:99] [download-only-330283] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 00:06:24.240631   17458 notify.go:221] Checking for updates...
	I1217 00:06:24.242068   17458 out.go:171] MINIKUBE_LOCATION=22168
	I1217 00:06:24.243692   17458 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:06:24.245226   17458 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22168-12839/kubeconfig
	I1217 00:06:24.246593   17458 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12839/.minikube
	I1217 00:06:24.247917   17458 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-330283 host does not exist
	  To start a cluster, run: "minikube start -p download-only-330283"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.17s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.17s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.16s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-330283
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.16s)

TestBinaryMirror (0.67s)

=== RUN   TestBinaryMirror
I1217 00:06:28.417180   17074 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-467623 --alsologtostderr --binary-mirror http://127.0.0.1:43951 --driver=kvm2  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-467623" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-467623
--- PASS: TestBinaryMirror (0.67s)
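
TestBinaryMirror also never boots a machine: the harness serves a local HTTP endpoint (127.0.0.1:43951 in this run) and checks that --binary-mirror makes minikube fetch kubectl/kubelet/kubeadm from it instead of dl.k8s.io (the binary.go:80 line above shows the checksum-pinned dl.k8s.io URL it would otherwise use). A standalone sketch of the same idea; the mirror directory layout and the profile name below are illustrative assumptions, not taken from the harness:

# layout assumed to mirror dl.k8s.io's /release/<version>/bin/linux/amd64/<binary> paths
mkdir -p mirror/release/v1.34.2/bin/linux/amd64
cp kubectl kubeadm kubelet mirror/release/v1.34.2/bin/linux/amd64/
python3 -m http.server 43951 --directory mirror &
out/minikube-linux-amd64 start --download-only -p binary-mirror-demo \
  --binary-mirror http://127.0.0.1:43951 --driver=kvm2 --container-runtime=crio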

TestOffline (78.81s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-331714 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-331714 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m17.942044565s)
helpers_test.go:176: Cleaning up "offline-crio-331714" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-331714
--- PASS: TestOffline (78.81s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-262069
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-262069: exit status 85 (81.105102ms)

-- stdout --
	* Profile "addons-262069" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-262069"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-262069
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-262069: exit status 85 (62.143389ms)

-- stdout --
	* Profile "addons-262069" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-262069"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (134.43s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-262069 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-262069 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m14.427366954s)
--- PASS: TestAddons/Setup (134.43s)

TestAddons/serial/GCPAuth/Namespaces (0.34s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-262069 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-262069 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.34s)

TestAddons/serial/GCPAuth/FakeCredentials (10.72s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-262069 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-262069 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [8a6b0152-c8cd-4b61-8658-a844c2dedd65] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [8a6b0152-c8cd-4b61-8658-a844c2dedd65] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.007673754s
addons_test.go:696: (dbg) Run:  kubectl --context addons-262069 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-262069 describe sa gcp-auth-test
addons_test.go:746: (dbg) Run:  kubectl --context addons-262069 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.72s)

TestAddons/parallel/Registry (17.22s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 12.622665ms
I1217 00:09:03.520269   17074 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1217 00:09:03.520298   17074 kapi.go:107] duration metric: took 12.660364ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-z9bzt" [15209453-1113-446e-94b5-19d615f67036] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.006311991s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-ng2lx" [f39654e9-51f3-4325-9568-3999f3904260] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.006651305s
addons_test.go:394: (dbg) Run:  kubectl --context addons-262069 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-262069 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-262069 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.199799995s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-amd64 -p addons-262069 ip
2025/12/17 00:09:19 [DEBUG] GET http://192.168.39.183:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-262069 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.22s)
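
The registry test probes from two directions: wget --spider exercises the in-cluster service DNS name from a throwaway busybox pod, and the DEBUG GET line shows the suite then reaching the registry through port 5000 on the node IP returned by "minikube ip". The same two probes by hand, reusing the context and address from this run (the pod name registry-check is made up for the example):

kubectl --context addons-262069 run --rm registry-check --restart=Never \
  --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
curl -s -o /dev/null -w '%{http_code}\n' http://192.168.39.183:5000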

TestAddons/parallel/RegistryCreds (0.77s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 7.392827ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-262069
addons_test.go:334: (dbg) Run:  kubectl --context addons-262069 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-262069 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.77s)

TestAddons/parallel/InspektorGadget (10.89s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-2h4q8" [b1e47942-9432-4fe4-841f-6b0a0984071b] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.006310635s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-262069 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-262069 addons disable inspektor-gadget --alsologtostderr -v=1: (5.887157007s)
--- PASS: TestAddons/parallel/InspektorGadget (10.89s)

TestAddons/parallel/MetricsServer (6.35s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 13.865202ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-94n2m" [9b665994-667f-4a3b-b44d-9949b0c4761c] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005115391s
addons_test.go:465: (dbg) Run:  kubectl --context addons-262069 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-262069 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-262069 addons disable metrics-server --alsologtostderr -v=1: (1.211202498s)
--- PASS: TestAddons/parallel/MetricsServer (6.35s)

TestAddons/parallel/CSI (52.89s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1217 00:09:03.507651   17074 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 12.671622ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-262069 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-262069 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-262069 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-262069 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-262069 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-262069 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-262069 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-262069 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-262069 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-262069 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-262069 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-262069 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [bca7779f-2ee0-4d14-a972-6a630bccbf19] Pending
helpers_test.go:353: "task-pv-pod" [bca7779f-2ee0-4d14-a972-6a630bccbf19] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [bca7779f-2ee0-4d14-a972-6a630bccbf19] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.006374654s
addons_test.go:574: (dbg) Run:  kubectl --context addons-262069 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-262069 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:436: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:428: (dbg) Run:  kubectl --context addons-262069 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-262069 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-262069 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-262069 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-262069 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-262069 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-262069 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-262069 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-262069 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-262069 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-262069 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-262069 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-262069 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-262069 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-262069 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-262069 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-262069 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-262069 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-262069 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-262069 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-262069 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [d148b5c7-6805-4a83-b166-0aa6d956d55d] Pending
helpers_test.go:353: "task-pv-pod-restore" [d148b5c7-6805-4a83-b166-0aa6d956d55d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [d148b5c7-6805-4a83-b166-0aa6d956d55d] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.00441748s
addons_test.go:616: (dbg) Run:  kubectl --context addons-262069 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-262069 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-262069 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-262069 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-262069 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-262069 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.116183283s)
--- PASS: TestAddons/parallel/CSI (52.89s)
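
The CSI sequence above is: provision a PVC through the hostpath driver, attach it to task-pv-pod, snapshot the volume, then restore the snapshot into hpvc-restore and mount that from task-pv-pod-restore. A minimal sketch of the first step in the spirit of testdata/csi-hostpath-driver/pvc.yaml; the storage class name is an assumption (csi-hostpath-sc is the addon's usual default), not a value read from the testdata:

kubectl --context addons-262069 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-hostpath-sc   # assumed class name, see note above
EOF
kubectl --context addons-262069 get pvc hpvc -o jsonpath={.status.phase}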

TestAddons/parallel/Headlamp (18.39s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-262069 --alsologtostderr -v=1
addons_test.go:810: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-262069 --alsologtostderr -v=1: (1.000709761s)
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-dfcdc64b-fthn5" [ac6d67e2-85de-4b33-a4ca-b9b4700a4f48] Pending
helpers_test.go:353: "headlamp-dfcdc64b-fthn5" [ac6d67e2-85de-4b33-a4ca-b9b4700a4f48] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-dfcdc64b-fthn5" [ac6d67e2-85de-4b33-a4ca-b9b4700a4f48] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.130175799s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-262069 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-262069 addons disable headlamp --alsologtostderr -v=1: (6.254721715s)
--- PASS: TestAddons/parallel/Headlamp (18.39s)

TestAddons/parallel/CloudSpanner (6.62s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-rwp8c" [1a3bd962-dc84-49d0-a0ed-b3305665e46c] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003500394s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-262069 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.62s)

TestAddons/parallel/LocalPath (8.21s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-262069 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-262069 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-262069 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-262069 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-262069 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-262069 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-262069 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [b4c6b389-bb72-4daa-9f5a-680d1e6345e5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [b4c6b389-bb72-4daa-9f5a-680d1e6345e5] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [b4c6b389-bb72-4daa-9f5a-680d1e6345e5] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.005078577s
addons_test.go:969: (dbg) Run:  kubectl --context addons-262069 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-amd64 -p addons-262069 ssh "cat /opt/local-path-provisioner/pvc-3eafbabf-bda1-4678-87d0-9af3d5bc37b7_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-262069 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-262069 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-262069 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (8.21s)
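
LocalPath covers the same claim-and-consume pattern with rancher's local-path provisioner instead of a CSI driver: the volume is a plain directory under /opt/local-path-provisioner on the node (which is why the test can simply ssh in and cat file1), and binding waits for the first consumer, which is why the PVC polls above stay Pending until test-local-path is scheduled. A minimal claim sketch; storageClassName local-path is the provisioner's conventional default, assumed here rather than read from the testdata:

kubectl --context addons-262069 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-path   # assumed class name, see note above
  resources:
    requests:
      storage: 64Mi
EOF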

TestAddons/parallel/NvidiaDevicePlugin (6.68s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-wb64t" [7e312275-8868-442b-bb94-0569b43cbe03] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.007624195s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-262069 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.68s)

TestAddons/parallel/Yakd (11.85s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-5ff678cb9-dctzp" [b0327f09-ecae-4c1d-9db3-7689de735742] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.006480531s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-262069 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-262069 addons disable yakd --alsologtostderr -v=1: (5.841337079s)
--- PASS: TestAddons/parallel/Yakd (11.85s)

TestAddons/StoppedEnableDisable (90.25s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-262069
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-262069: (1m30.033342981s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-262069
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-262069
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-262069
--- PASS: TestAddons/StoppedEnableDisable (90.25s)

TestCertOptions (63.09s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-545708 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-545708 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m1.744484449s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-545708 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-545708 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-545708 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-545708" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-545708
--- PASS: TestCertOptions (63.09s)
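
A by-hand check of what this test asserts (a sketch; the grep filters are illustrative, not part of the harness): the extra SANs passed via --apiserver-ips/--apiserver-names should appear in the API server certificate, and the kubeconfig should point at port 8555.
	out/minikube-linux-amd64 -p cert-options-545708 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 'Subject Alternative Name'
	kubectl --context cert-options-545708 config view | grep server: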

TestCertExpiration (307.04s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-656320 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-656320 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m19.6895291s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-656320 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-656320 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (46.455165598s)
helpers_test.go:176: Cleaning up "cert-expiration-656320" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-656320
--- PASS: TestCertExpiration (307.04s)
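
The timing above is consistent with the test's structure: roughly 80s for the first start with three-minute certificates, a wait for them to lapse, then roughly 46s for a restart that reissues them with --cert-expiration=8760h. A sketch of the same flow by hand; the sleep and the openssl end-date check are illustrative assumptions, not harness steps.
	out/minikube-linux-amd64 start -p cert-expiration-656320 --memory=3072 --cert-expiration=3m --driver=kvm2 --container-runtime=crio
	sleep 180   # let the 3m certificates expire
	out/minikube-linux-amd64 start -p cert-expiration-656320 --memory=3072 --cert-expiration=8760h --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 -p cert-expiration-656320 ssh "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"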

TestForceSystemdFlag (65.93s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-447747 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-447747 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m4.893212416s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-447747 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:176: Cleaning up "force-systemd-flag-447747" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-447747
--- PASS: TestForceSystemdFlag (65.93s)
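
The cat of 02-crio.conf above is how the test verifies the flag took effect. The same check by hand; grepping for cgroup_manager is an illustrative assumption about the drop-in's contents:
	out/minikube-linux-amd64 -p force-systemd-flag-447747 ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager
	# expected with --force-systemd: cgroup_manager = "systemd"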

TestForceSystemdEnv (61.09s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-409641 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-409641 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (59.338464481s)
helpers_test.go:176: Cleaning up "force-systemd-env-409641" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-409641
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-409641: (1.747312691s)
--- PASS: TestForceSystemdEnv (61.09s)

TestErrorSpam/setup (40.75s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-758271 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-758271 --driver=kvm2  --container-runtime=crio
E1217 00:13:44.661553   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:13:44.668588   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:13:44.679990   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:13:44.701506   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:13:44.743090   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:13:44.824650   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:13:44.986231   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:13:45.308012   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:13:45.950163   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:13:47.231809   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:13:49.794170   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:13:54.915925   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-758271 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-758271 --driver=kvm2  --container-runtime=crio: (40.745153656s)
--- PASS: TestErrorSpam/setup (40.75s)

TestErrorSpam/start (0.36s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-758271 --log_dir /tmp/nospam-758271 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-758271 --log_dir /tmp/nospam-758271 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-758271 --log_dir /tmp/nospam-758271 start --dry-run
--- PASS: TestErrorSpam/start (0.36s)

TestErrorSpam/status (0.72s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-758271 --log_dir /tmp/nospam-758271 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-758271 --log_dir /tmp/nospam-758271 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-758271 --log_dir /tmp/nospam-758271 status
--- PASS: TestErrorSpam/status (0.72s)

TestErrorSpam/pause (1.58s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-758271 --log_dir /tmp/nospam-758271 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-758271 --log_dir /tmp/nospam-758271 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-758271 --log_dir /tmp/nospam-758271 pause
--- PASS: TestErrorSpam/pause (1.58s)

TestErrorSpam/unpause (1.9s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-758271 --log_dir /tmp/nospam-758271 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-758271 --log_dir /tmp/nospam-758271 unpause
E1217 00:14:05.157448   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-758271 --log_dir /tmp/nospam-758271 unpause
--- PASS: TestErrorSpam/unpause (1.90s)

TestErrorSpam/stop (5.13s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-758271 --log_dir /tmp/nospam-758271 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-758271 --log_dir /tmp/nospam-758271 stop: (2.04889321s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-758271 --log_dir /tmp/nospam-758271 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-758271 --log_dir /tmp/nospam-758271 stop: (1.196033793s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-758271 --log_dir /tmp/nospam-758271 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-758271 --log_dir /tmp/nospam-758271 stop: (1.885505388s)
--- PASS: TestErrorSpam/stop (5.13s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22168-12839/.minikube/files/etc/test/nested/copy/17074/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (77.95s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-069802 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1217 00:14:25.639008   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:15:06.602072   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-069802 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m17.952393624s)
--- PASS: TestFunctional/serial/StartWithProxy (77.95s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (53.87s)

=== RUN   TestFunctional/serial/SoftStart
I1217 00:15:29.475833   17074 config.go:182] Loaded profile config "functional-069802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-069802 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-069802 --alsologtostderr -v=8: (53.868005243s)
functional_test.go:678: soft start took 53.868736481s for "functional-069802" cluster.
I1217 00:16:23.344245   17074 config.go:182] Loaded profile config "functional-069802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (53.87s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-069802 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-069802 cache add registry.k8s.io/pause:3.1: (1.031587603s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-069802 cache add registry.k8s.io/pause:3.3: (1.009826843s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-069802 cache add registry.k8s.io/pause:latest: (1.033087592s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.07s)

TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-069802 /tmp/TestFunctionalserialCacheCmdcacheadd_local136696932/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 cache add minikube-local-cache-test:functional-069802
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 cache delete minikube-local-cache-test:functional-069802
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-069802
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.54s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-069802 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (174.76667ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 cache reload
E1217 00:16:28.523835   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.54s)
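
The Non-zero exit above is the expected midpoint of this test: the image is removed in the guest, inspecti fails, and cache reload re-pushes it. A sketch of the same round-trip by hand, using only commands shown in the log:
	out/minikube-linux-amd64 -p functional-069802 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-069802 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image absent
	out/minikube-linux-amd64 -p functional-069802 cache reload
	out/minikube-linux-amd64 -p functional-069802 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again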

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 kubectl -- --context functional-069802 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-069802 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (36.03s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-069802 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-069802 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.030725891s)
functional_test.go:776: restart took 36.030823539s for "functional-069802" cluster.
I1217 00:17:05.904489   17074 config.go:182] Loaded profile config "functional-069802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (36.03s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-069802 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.36s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-069802 logs: (1.3551501s)
--- PASS: TestFunctional/serial/LogsCmd (1.36s)

TestFunctional/serial/LogsFileCmd (1.34s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 logs --file /tmp/TestFunctionalserialLogsFileCmd1974317867/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-069802 logs --file /tmp/TestFunctionalserialLogsFileCmd1974317867/001/logs.txt: (1.337710833s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.34s)

TestFunctional/serial/InvalidService (4.1s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-069802 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-069802
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-069802: exit status 115 (245.248944ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL             │
	├───────────┼─────────────┼─────────────┼────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.21:31494 │
	└───────────┴─────────────┴─────────────┴────────────────────────────┘

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-069802 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.10s)

TestFunctional/parallel/ConfigCmd (0.4s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-069802 config get cpus: exit status 14 (63.748421ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-069802 config get cpus: exit status 14 (64.153543ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.40s)
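
Both exit status 14 entries above are expected: `config get` on an unset key fails. The full cycle by hand (a sketch; the echo is an illustrative way to surface the status code):
	out/minikube-linux-amd64 -p functional-069802 config unset cpus
	out/minikube-linux-amd64 -p functional-069802 config get cpus || echo "exit $?"   # exit 14
	out/minikube-linux-amd64 -p functional-069802 config set cpus 2
	out/minikube-linux-amd64 -p functional-069802 config get cpus   # prints 2
	out/minikube-linux-amd64 -p functional-069802 config unset cpus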

TestFunctional/parallel/DashboardCmd (11.86s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-069802 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-069802 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 23105: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.86s)

TestFunctional/parallel/DryRun (0.25s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-069802 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-069802 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (123.983613ms)

-- stdout --
	* [functional-069802] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22168-12839/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12839/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile

-- /stdout --
** stderr ** 
	I1217 00:17:23.066478   22888 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:17:23.066727   22888 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:17:23.066737   22888 out.go:374] Setting ErrFile to fd 2...
	I1217 00:17:23.066743   22888 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:17:23.066938   22888 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12839/.minikube/bin
	I1217 00:17:23.067427   22888 out.go:368] Setting JSON to false
	I1217 00:17:23.068289   22888 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3589,"bootTime":1765927054,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 00:17:23.068348   22888 start.go:143] virtualization: kvm guest
	I1217 00:17:23.070393   22888 out.go:179] * [functional-069802] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 00:17:23.071876   22888 notify.go:221] Checking for updates...
	I1217 00:17:23.071907   22888 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:17:23.074615   22888 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:17:23.076284   22888 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-12839/kubeconfig
	I1217 00:17:23.077677   22888 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12839/.minikube
	I1217 00:17:23.079465   22888 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 00:17:23.080741   22888 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:17:23.082610   22888 config.go:182] Loaded profile config "functional-069802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:17:23.083334   22888 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:17:23.116670   22888 out.go:179] * Using the kvm2 driver based on existing profile
	I1217 00:17:23.118130   22888 start.go:309] selected driver: kvm2
	I1217 00:17:23.118146   22888 start.go:927] validating driver "kvm2" against &{Name:functional-069802 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:functional-069802 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.21 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
ntString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:17:23.118255   22888 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:17:23.122140   22888 out.go:203] 
	W1217 00:17:23.123783   22888 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1217 00:17:23.126009   22888 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-069802 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.25s)

TestFunctional/parallel/InternationalLanguage (0.13s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-069802 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-069802 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (126.097762ms)

-- stdout --
	* [functional-069802] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22168-12839/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12839/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant

-- /stdout --
** stderr ** 
	I1217 00:17:22.943574   22860 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:17:22.943691   22860 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:17:22.943704   22860 out.go:374] Setting ErrFile to fd 2...
	I1217 00:17:22.943711   22860 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:17:22.944206   22860 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12839/.minikube/bin
	I1217 00:17:22.944816   22860 out.go:368] Setting JSON to false
	I1217 00:17:22.946011   22860 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3589,"bootTime":1765927054,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 00:17:22.946121   22860 start.go:143] virtualization: kvm guest
	I1217 00:17:22.948228   22860 out.go:179] * [functional-069802] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1217 00:17:22.949814   22860 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:17:22.949816   22860 notify.go:221] Checking for updates...
	I1217 00:17:22.952780   22860 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:17:22.954153   22860 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-12839/kubeconfig
	I1217 00:17:22.955412   22860 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12839/.minikube
	I1217 00:17:22.956523   22860 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 00:17:22.957744   22860 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:17:22.959705   22860 config.go:182] Loaded profile config "functional-069802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:17:22.960430   22860 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:17:22.996692   22860 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1217 00:17:22.997971   22860 start.go:309] selected driver: kvm2
	I1217 00:17:22.997986   22860 start.go:927] validating driver "kvm2" against &{Name:functional-069802 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:functional-069802 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.21 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
ntString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:17:22.998137   22860 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:17:23.000293   22860 out.go:203] 
	W1217 00:17:23.001819   22860 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1217 00:17:23.003064   22860 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)

TestFunctional/parallel/StatusCmd (0.81s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.81s)

TestFunctional/parallel/ServiceCmdConnect (9.51s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-069802 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-069802 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-68pl6" [f6b95b6f-f080-411e-9379-6d085bd4633d] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-68pl6" [f6b95b6f-f080-411e-9379-6d085bd4633d] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.018147092s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.21:30676
functional_test.go:1680: http://192.168.39.21:30676: success! body:
Request served by hello-node-connect-7d85dfc575-68pl6

HTTP/1.1 GET /

Host: 192.168.39.21:30676
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.51s)
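
The echoed request body above comes from kicbase/echo-server, which reflects the request it receives. A sketch of probing the same NodePort URL by hand; the curl and the URL variable are illustrative additions:
	URL=$(out/minikube-linux-amd64 -p functional-069802 service hello-node-connect --url)
	curl -s "$URL"   # body starts with "Request served by hello-node-connect-..."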

TestFunctional/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/PersistentVolumeClaim (31.67s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [479b1b07-714e-44e6-ac16-1a2a892630c3] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.006280175s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-069802 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-069802 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-069802 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-069802 apply -f testdata/storage-provisioner/pod.yaml
I1217 00:17:19.273342   17074 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [873032d8-ef8c-44a9-8d5a-51bc50f6434f] Pending
helpers_test.go:353: "sp-pod" [873032d8-ef8c-44a9-8d5a-51bc50f6434f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [873032d8-ef8c-44a9-8d5a-51bc50f6434f] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.00851419s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-069802 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-069802 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-069802 delete -f testdata/storage-provisioner/pod.yaml: (2.655484282s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-069802 apply -f testdata/storage-provisioner/pod.yaml
I1217 00:17:31.340953   17074 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [31c72214-2f1d-4475-b8bc-19dccf7a5bea] Pending
helpers_test.go:353: "sp-pod" [31c72214-2f1d-4475-b8bc-19dccf7a5bea] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [31c72214-2f1d-4475-b8bc-19dccf7a5bea] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.007421433s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-069802 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (31.67s)
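
The second sp-pod run above is the persistence check: /tmp/mount/foo is written before the first pod is deleted and must still be visible from the replacement pod on the same claim. Condensed by hand, using the commands from the log:
	kubectl --context functional-069802 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-069802 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-069802 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-069802 exec sp-pod -- ls /tmp/mount   # foo should survive the pod swap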

TestFunctional/parallel/SSHCmd (0.34s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.34s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 ssh -n functional-069802 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 cp functional-069802:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2757609949/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 ssh -n functional-069802 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 ssh -n functional-069802 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.25s)
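The same copy round-trip can be driven manually with the cp and ssh subcommands shown above; a minimal sketch (the host-side destination path is illustrative):

    # host -> guest, then read it back over ssh
    out/minikube-linux-amd64 -p functional-069802 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-amd64 -p functional-069802 ssh -n functional-069802 "sudo cat /home/docker/cp-test.txt"

    # guest -> host
    out/minikube-linux-amd64 -p functional-069802 cp functional-069802:/home/docker/cp-test.txt /tmp/cp-test-copy.txt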

                                                
                                    
TestFunctional/parallel/MySQL (34.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-069802 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-6bcdcbc558-gv88w" [7425b136-0326-41c0-a54d-086946145a0c] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-6bcdcbc558-gv88w" [7425b136-0326-41c0-a54d-086946145a0c] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 28.460577179s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-069802 exec mysql-6bcdcbc558-gv88w -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-069802 exec mysql-6bcdcbc558-gv88w -- mysql -ppassword -e "show databases;": exit status 1 (145.137561ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 00:17:56.494152   17074 retry.go:31] will retry after 705.82369ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-069802 exec mysql-6bcdcbc558-gv88w -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-069802 exec mysql-6bcdcbc558-gv88w -- mysql -ppassword -e "show databases;": exit status 1 (174.822635ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 00:17:57.376126   17074 retry.go:31] will retry after 1.973206774s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-069802 exec mysql-6bcdcbc558-gv88w -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-069802 exec mysql-6bcdcbc558-gv88w -- mysql -ppassword -e "show databases;": exit status 1 (126.561276ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 00:17:59.477983   17074 retry.go:31] will retry after 2.548361879s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-069802 exec mysql-6bcdcbc558-gv88w -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (34.49s)
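The access-denied and socket errors above appear to be transient startup noise while mysqld initializes inside the pod; the harness simply retries until the query succeeds. A manual check can poll the same command (pod name taken from this run; it will differ on a fresh deployment):

    until kubectl --context functional-069802 exec mysql-6bcdcbc558-gv88w -- \
        mysql -ppassword -e "show databases;"; do
      sleep 2
    done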

                                                
                                    
TestFunctional/parallel/FileSync (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/17074/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 ssh "sudo cat /etc/test/nested/copy/17074/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)
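minikube's file sync copies files placed under the host's ~/.minikube/files tree into the guest at the same path, which is how the test's hosts file appears inside the VM (the 17074 path component embeds this run's process id and will differ between runs). A manual spot check:

    out/minikube-linux-amd64 -p functional-069802 ssh "sudo cat /etc/test/nested/copy/17074/hosts"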

                                                
                                    
TestFunctional/parallel/CertSync (1.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/17074.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 ssh "sudo cat /etc/ssl/certs/17074.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/17074.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 ssh "sudo cat /usr/share/ca-certificates/17074.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/170742.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 ssh "sudo cat /etc/ssl/certs/170742.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/170742.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 ssh "sudo cat /usr/share/ca-certificates/170742.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.27s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-069802 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-069802 ssh "sudo systemctl is-active docker": exit status 1 (176.90335ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-069802 ssh "sudo systemctl is-active containerd": exit status 1 (207.951739ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.39s)
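The non-zero exits above are expected: systemctl is-active prints "inactive" and returns a non-zero status (3) when a unit is not running, which the ssh wrapper surfaces as a failed command. A sketch of checking it by hand on this crio profile:

    out/minikube-linux-amd64 -p functional-069802 ssh "sudo systemctl is-active docker" \
      || echo "docker is not the active runtime (expected on a crio profile)"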

                                                
                                    
TestFunctional/parallel/License (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.30s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (8.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-069802 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-069802 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-8k4t6" [e502c704-8c68-473f-8815-6972f4faed05] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-8k4t6" [e502c704-8c68-473f-8815-6972f4faed05] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.004188825s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.20s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "294.751134ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "59.846087ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "293.189947ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "57.272283ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)
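The JSON output is intended for machine consumption; a minimal sketch of pretty-printing it for inspection (python3 used here only as an example consumer, not something the test requires):

    out/minikube-linux-amd64 profile list -o json | python3 -m json.tool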

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-069802 /tmp/TestFunctionalparallelMountCmdany-port550211723/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765930635043973604" to /tmp/TestFunctionalparallelMountCmdany-port550211723/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765930635043973604" to /tmp/TestFunctionalparallelMountCmdany-port550211723/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765930635043973604" to /tmp/TestFunctionalparallelMountCmdany-port550211723/001/test-1765930635043973604
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-069802 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (161.002671ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1217 00:17:15.205329   17074 retry.go:31] will retry after 455.578158ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 17 00:17 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 17 00:17 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 17 00:17 test-1765930635043973604
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 ssh cat /mount-9p/test-1765930635043973604
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-069802 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [9ed79969-9348-4aaa-a3dc-d0e99489599c] Pending
helpers_test.go:353: "busybox-mount" [9ed79969-9348-4aaa-a3dc-d0e99489599c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [9ed79969-9348-4aaa-a3dc-d0e99489599c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [9ed79969-9348-4aaa-a3dc-d0e99489599c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.002991811s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-069802 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-069802 /tmp/TestFunctionalparallelMountCmdany-port550211723/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.05s)
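The 9p mount exercised above can be reproduced manually: start the mount in one terminal and verify it from inside the guest in another (the host directory here is illustrative):

    # terminal 1: keep the mount running in the foreground
    out/minikube-linux-amd64 mount -p functional-069802 /tmp/host-dir:/mount-9p

    # terminal 2: confirm the 9p filesystem and list its contents
    out/minikube-linux-amd64 -p functional-069802 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-069802 ssh -- ls -la /mount-9p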

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.26s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 service list -o json
functional_test.go:1504: Took "267.995445ms" to run "out/minikube-linux-amd64 -p functional-069802 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.27s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.21:30738
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.27s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.31s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.21:30738
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.30s)
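With the NodePort URL discovered above, hitting the echo server directly is a one-liner (curl assumed to be available on the host):

    curl -s "$(out/minikube-linux-amd64 -p functional-069802 service hello-node --url)"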

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-069802 /tmp/TestFunctionalparallelMountCmdspecific-port393505826/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-069802 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (185.426288ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1217 00:17:23.276979   17074 retry.go:31] will retry after 575.788906ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-069802 /tmp/TestFunctionalparallelMountCmdspecific-port393505826/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-069802 ssh "sudo umount -f /mount-9p": exit status 1 (191.024238ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-069802 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-069802 /tmp/TestFunctionalparallelMountCmdspecific-port393505826/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.51s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.44s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-069802 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
localhost/minikube-local-cache-test:functional-069802
localhost/kicbase/echo-server:functional-069802
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-069802 image ls --format short --alsologtostderr:
I1217 00:17:34.600798   23524 out.go:360] Setting OutFile to fd 1 ...
I1217 00:17:34.601072   23524 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:17:34.601082   23524 out.go:374] Setting ErrFile to fd 2...
I1217 00:17:34.601089   23524 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:17:34.601279   23524 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12839/.minikube/bin
I1217 00:17:34.601782   23524 config.go:182] Loaded profile config "functional-069802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1217 00:17:34.601901   23524 config.go:182] Loaded profile config "functional-069802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1217 00:17:34.603892   23524 ssh_runner.go:195] Run: systemctl --version
I1217 00:17:34.606007   23524 main.go:143] libmachine: domain functional-069802 has defined MAC address 52:54:00:10:d7:24 in network mk-functional-069802
I1217 00:17:34.606386   23524 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:10:d7:24", ip: ""} in network mk-functional-069802: {Iface:virbr1 ExpiryTime:2025-12-17 01:14:26 +0000 UTC Type:0 Mac:52:54:00:10:d7:24 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:functional-069802 Clientid:01:52:54:00:10:d7:24}
I1217 00:17:34.606415   23524 main.go:143] libmachine: domain functional-069802 has defined IP address 192.168.39.21 and MAC address 52:54:00:10:d7:24 in network mk-functional-069802
I1217 00:17:34.606556   23524 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/functional-069802/id_rsa Username:docker}
I1217 00:17:34.690421   23524 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.18s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-069802 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ localhost/minikube-local-cache-test     │ functional-069802  │ fdeb3cf48ad96 │ 3.33kB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.2            │ 88320b5498ff2 │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-069802  │ 9056ab77afb8e │ 4.94MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.2            │ a5f569d49a979 │ 89MB   │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.2            │ 01e8bacf0f500 │ 76MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.2            │ 8aa150647e88a │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ public.ecr.aws/nginx/nginx              │ alpine             │ a236f84b9d5d2 │ 55.2MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-069802 image ls --format table --alsologtostderr:
I1217 00:17:35.385251   23580 out.go:360] Setting OutFile to fd 1 ...
I1217 00:17:35.385362   23580 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:17:35.385370   23580 out.go:374] Setting ErrFile to fd 2...
I1217 00:17:35.385374   23580 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:17:35.385552   23580 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12839/.minikube/bin
I1217 00:17:35.386085   23580 config.go:182] Loaded profile config "functional-069802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1217 00:17:35.386172   23580 config.go:182] Loaded profile config "functional-069802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1217 00:17:35.388368   23580 ssh_runner.go:195] Run: systemctl --version
I1217 00:17:35.390761   23580 main.go:143] libmachine: domain functional-069802 has defined MAC address 52:54:00:10:d7:24 in network mk-functional-069802
I1217 00:17:35.391208   23580 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:10:d7:24", ip: ""} in network mk-functional-069802: {Iface:virbr1 ExpiryTime:2025-12-17 01:14:26 +0000 UTC Type:0 Mac:52:54:00:10:d7:24 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:functional-069802 Clientid:01:52:54:00:10:d7:24}
I1217 00:17:35.391236   23580 main.go:143] libmachine: domain functional-069802 has defined IP address 192.168.39.21 and MAC address 52:54:00:10:d7:24 in network mk-functional-069802
I1217 00:17:35.391468   23580 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/functional-069802/id_rsa Username:docker}
I1217 00:17:35.484000   23580 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-069802 image ls --format json --alsologtostderr:
[{"id":"a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c","repoDigests":["public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff","public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55156597"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45","repoDigests":["registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74","registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k
8s.io/kube-proxy:v1.34.2"],"size":"73145240"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-069802"],"size":"4944818"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a0
86b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kin
dnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e
854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"fdeb3cf48ad9679b364e8deba7195e98ea961267e4f7508433f750d27c318e4e","repoDigests":["localhost/minikube-local-cache-test@sha256:d3f1d938016158687e2c0260ee06ac6af7ec1816fdb1b317a8cae035c182486c"],"repoTags":["localhost/minikube-local-cache-test:functional-069802"],"size":"3328"},{"id":"88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952","repoDigests":["registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6","registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"53848919"},{"id":"a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85","repoDigests":["registry.k8s.io/kube-apiserver@sha
256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077","registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"89046001"},{"id":"01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb","registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"76004183"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518
083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-069802 image ls --format json --alsologtostderr:
I1217 00:17:35.168550   23564 out.go:360] Setting OutFile to fd 1 ...
I1217 00:17:35.169861   23564 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:17:35.169994   23564 out.go:374] Setting ErrFile to fd 2...
I1217 00:17:35.170007   23564 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:17:35.170264   23564 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12839/.minikube/bin
I1217 00:17:35.170877   23564 config.go:182] Loaded profile config "functional-069802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1217 00:17:35.170990   23564 config.go:182] Loaded profile config "functional-069802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1217 00:17:35.173233   23564 ssh_runner.go:195] Run: systemctl --version
I1217 00:17:35.175982   23564 main.go:143] libmachine: domain functional-069802 has defined MAC address 52:54:00:10:d7:24 in network mk-functional-069802
I1217 00:17:35.176476   23564 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:10:d7:24", ip: ""} in network mk-functional-069802: {Iface:virbr1 ExpiryTime:2025-12-17 01:14:26 +0000 UTC Type:0 Mac:52:54:00:10:d7:24 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:functional-069802 Clientid:01:52:54:00:10:d7:24}
I1217 00:17:35.176508   23564 main.go:143] libmachine: domain functional-069802 has defined IP address 192.168.39.21 and MAC address 52:54:00:10:d7:24 in network mk-functional-069802
I1217 00:17:35.176686   23564 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/functional-069802/id_rsa Username:docker}
I1217 00:17:35.266422   23564 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 image ls --format yaml --alsologtostderr
2025/12/17 00:17:34 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-069802 image ls --format yaml --alsologtostderr:
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
- registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "89046001"
- id: 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
- registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "53848919"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: fdeb3cf48ad9679b364e8deba7195e98ea961267e4f7508433f750d27c318e4e
repoDigests:
- localhost/minikube-local-cache-test@sha256:d3f1d938016158687e2c0260ee06ac6af7ec1816fdb1b317a8cae035c182486c
repoTags:
- localhost/minikube-local-cache-test:functional-069802
size: "3328"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-069802
size: "4944818"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
- registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "76004183"
- id: 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45
repoDigests:
- registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "73145240"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
- public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55156597"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-069802 image ls --format yaml --alsologtostderr:
I1217 00:17:34.785309   23535 out.go:360] Setting OutFile to fd 1 ...
I1217 00:17:34.785533   23535 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:17:34.785541   23535 out.go:374] Setting ErrFile to fd 2...
I1217 00:17:34.785544   23535 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:17:34.785739   23535 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12839/.minikube/bin
I1217 00:17:34.786253   23535 config.go:182] Loaded profile config "functional-069802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1217 00:17:34.786357   23535 config.go:182] Loaded profile config "functional-069802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1217 00:17:34.788507   23535 ssh_runner.go:195] Run: systemctl --version
I1217 00:17:34.790575   23535 main.go:143] libmachine: domain functional-069802 has defined MAC address 52:54:00:10:d7:24 in network mk-functional-069802
I1217 00:17:34.790973   23535 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:10:d7:24", ip: ""} in network mk-functional-069802: {Iface:virbr1 ExpiryTime:2025-12-17 01:14:26 +0000 UTC Type:0 Mac:52:54:00:10:d7:24 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:functional-069802 Clientid:01:52:54:00:10:d7:24}
I1217 00:17:34.791005   23535 main.go:143] libmachine: domain functional-069802 has defined IP address 192.168.39.21 and MAC address 52:54:00:10:d7:24 in network mk-functional-069802
I1217 00:17:34.791146   23535 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/functional-069802/id_rsa Username:docker}
I1217 00:17:34.876819   23535 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-069802 ssh pgrep buildkitd: exit status 1 (157.439551ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 image build -t localhost/my-image:functional-069802 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-069802 image build -t localhost/my-image:functional-069802 testdata/build --alsologtostderr: (3.198796183s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-069802 image build -t localhost/my-image:functional-069802 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 8d3add01a9e
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-069802
--> 4a651af1a8b
Successfully tagged localhost/my-image:functional-069802
4a651af1a8b2c630b9034f5d717f2575e34a8f02944133a891e01abd6ae1425c
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-069802 image build -t localhost/my-image:functional-069802 testdata/build --alsologtostderr:
I1217 00:17:35.154789   23558 out.go:360] Setting OutFile to fd 1 ...
I1217 00:17:35.155131   23558 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:17:35.155143   23558 out.go:374] Setting ErrFile to fd 2...
I1217 00:17:35.155148   23558 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:17:35.155383   23558 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12839/.minikube/bin
I1217 00:17:35.155984   23558 config.go:182] Loaded profile config "functional-069802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1217 00:17:35.156731   23558 config.go:182] Loaded profile config "functional-069802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1217 00:17:35.159567   23558 ssh_runner.go:195] Run: systemctl --version
I1217 00:17:35.162258   23558 main.go:143] libmachine: domain functional-069802 has defined MAC address 52:54:00:10:d7:24 in network mk-functional-069802
I1217 00:17:35.162817   23558 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:10:d7:24", ip: ""} in network mk-functional-069802: {Iface:virbr1 ExpiryTime:2025-12-17 01:14:26 +0000 UTC Type:0 Mac:52:54:00:10:d7:24 Iaid: IPaddr:192.168.39.21 Prefix:24 Hostname:functional-069802 Clientid:01:52:54:00:10:d7:24}
I1217 00:17:35.162845   23558 main.go:143] libmachine: domain functional-069802 has defined IP address 192.168.39.21 and MAC address 52:54:00:10:d7:24 in network mk-functional-069802
I1217 00:17:35.163014   23558 sshutil.go:53] new ssh client: &{IP:192.168.39.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/functional-069802/id_rsa Username:docker}
I1217 00:17:35.249444   23558 build_images.go:162] Building image from path: /tmp/build.3810117447.tar
I1217 00:17:35.249526   23558 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1217 00:17:35.263835   23558 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3810117447.tar
I1217 00:17:35.270858   23558 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3810117447.tar: stat -c "%s %y" /var/lib/minikube/build/build.3810117447.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3810117447.tar': No such file or directory
I1217 00:17:35.270889   23558 ssh_runner.go:362] scp /tmp/build.3810117447.tar --> /var/lib/minikube/build/build.3810117447.tar (3072 bytes)
I1217 00:17:35.315495   23558 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3810117447
I1217 00:17:35.332400   23558 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3810117447 -xf /var/lib/minikube/build/build.3810117447.tar
I1217 00:17:35.351435   23558 crio.go:315] Building image: /var/lib/minikube/build/build.3810117447
I1217 00:17:35.351523   23558 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-069802 /var/lib/minikube/build/build.3810117447 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1217 00:17:38.238370   23558 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-069802 /var/lib/minikube/build/build.3810117447 --cgroup-manager=cgroupfs: (2.88681942s)
I1217 00:17:38.238447   23558 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3810117447
I1217 00:17:38.263483   23558 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3810117447.tar
I1217 00:17:38.286566   23558 build_images.go:218] Built localhost/my-image:functional-069802 from /tmp/build.3810117447.tar
I1217 00:17:38.286607   23558 build_images.go:134] succeeded building to: functional-069802
I1217 00:17:38.286613   23558 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.95s)
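The sequence above shows minikube's fallback build path on crio: the pgrep probe finds no buildkitd in the VM, so the client packs the build context into a tarball, ships it to /var/lib/minikube/build over SSH, unpacks it, and runs podman build with the cgroupfs manager. A rough manual equivalent under the same profile (the /tmp tar path and the ctx directory name are illustrative, not minikube internals, and the in-node steps assume sudo works as in the log):

	# pack the build context on the host (mirrors /tmp/build.3810117447.tar in the log)
	tar -cf /tmp/build-ctx.tar -C testdata/build .
	# stage the tar in the node, unpack it, and build with podman as minikube does
	minikube -p functional-069802 cp /tmp/build-ctx.tar /tmp/build-ctx.tar
	minikube -p functional-069802 ssh "sudo mkdir -p /var/lib/minikube/build/ctx"
	minikube -p functional-069802 ssh "sudo tar -C /var/lib/minikube/build/ctx -xf /tmp/build-ctx.tar"
	minikube -p functional-069802 ssh "sudo podman build -t localhost/my-image:functional-069802 /var/lib/minikube/build/ctx --cgroup-manager=cgroupfs"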

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-069802
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.48s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-069802 /tmp/TestFunctionalparallelMountCmdVerifyCleanup249979914/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-069802 /tmp/TestFunctionalparallelMountCmdVerifyCleanup249979914/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-069802 /tmp/TestFunctionalparallelMountCmdVerifyCleanup249979914/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-069802 ssh "findmnt -T" /mount1: exit status 1 (224.116319ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1217 00:17:24.828251   17074 retry.go:31] will retry after 664.933831ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-069802 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-069802 /tmp/TestFunctionalparallelMountCmdVerifyCleanup249979914/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-069802 /tmp/TestFunctionalparallelMountCmdVerifyCleanup249979914/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-069802 /tmp/TestFunctionalparallelMountCmdVerifyCleanup249979914/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.51s)
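VerifyCleanup starts three concurrent mount daemons for the same host directory, probes each target with findmnt over SSH (the first probe can race daemon startup, hence the single retry above), then uses the kill flag to tear all of them down at once. A hand-run sketch of the same cycle (the shared directory and mount points are examples):

	minikube mount -p functional-069802 /tmp/shared:/mount1 &    # one daemon per mount point
	minikube mount -p functional-069802 /tmp/shared:/mount2 &
	minikube -p functional-069802 ssh "findmnt -T /mount1"       # verify inside the node
	minikube -p functional-069802 ssh "findmnt -T /mount2"
	minikube mount -p functional-069802 --kill=true              # kill every mount daemon for the profile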

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 image load --daemon kicbase/echo-server:functional-069802 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-069802 image load --daemon kicbase/echo-server:functional-069802 --alsologtostderr: (1.28045088s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 image load --daemon kicbase/echo-server:functional-069802 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.05s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-069802
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 image load --daemon kicbase/echo-server:functional-069802 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.21s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.07s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 image save kicbase/echo-server:functional-069802 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:395: (dbg) Done: out/minikube-linux-amd64 -p functional-069802 image save kicbase/echo-server:functional-069802 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (2.672968787s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.67s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 image rm kicbase/echo-server:functional-069802 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-linux-amd64 -p functional-069802 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (1.375345051s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.62s)
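Taken together, ImageSaveToFile, ImageRemove, and ImageLoadFromFile exercise a full tarball round trip through the node's container storage: export the tagged image to a host tar, delete it from the node, and re-import it from the tar. The same round trip by hand (the tar path is an example):

	minikube -p functional-069802 image save kicbase/echo-server:functional-069802 /tmp/echo-server.tar
	minikube -p functional-069802 image rm kicbase/echo-server:functional-069802
	minikube -p functional-069802 image load /tmp/echo-server.tar
	minikube -p functional-069802 image ls    # the tag should be listed again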

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-069802
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-069802 image save --daemon kicbase/echo-server:functional-069802 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-069802
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-069802
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-069802
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-069802
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22168-12839/.minikube/files/etc/test/nested/copy/17074/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (75.71s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-698418 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1217 00:18:44.661222   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:19:12.366109   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-698418 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (1m15.707265894s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (75.71s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (197.8s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1217 00:19:18.964647   17074 config.go:182] Loaded profile config "functional-698418": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-698418 --alsologtostderr -v=8
E1217 00:22:12.863627   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-069802/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:22:12.870105   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-069802/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:22:12.881530   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-069802/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:22:12.902941   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-069802/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:22:12.944389   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-069802/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:22:13.025850   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-069802/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:22:13.187437   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-069802/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:22:13.509154   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-069802/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:22:14.151258   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-069802/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:22:15.433104   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-069802/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:22:17.996042   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-069802/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:22:23.117781   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-069802/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:22:33.359074   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-069802/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-698418 --alsologtostderr -v=8: (3m17.803518737s)
functional_test.go:678: soft start took 3m17.803926571s for "functional-698418" cluster.
I1217 00:22:36.768567   17074 config.go:182] Loaded profile config "functional-698418": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (197.80s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-698418 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-698418 cache add registry.k8s.io/pause:3.3: (1.048895572s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-698418 cache add registry.k8s.io/pause:latest: (1.048943338s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.08s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-698418 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach2554098440/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 cache add minikube-local-cache-test:functional-698418
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 cache delete minikube-local-cache-test:functional-698418
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-698418
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.04s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.52s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-698418 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (184.703183ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.52s)
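cache_reload demonstrates that the cache subcommand keeps a host-side copy that can repopulate the node: the image is deleted in the node with crictl, the inspecti probe fails with "no such image", and cache reload pushes the cached copy back. The same steps by hand:

	minikube -p functional-698418 ssh "sudo crictl rmi registry.k8s.io/pause:latest"
	minikube -p functional-698418 ssh "sudo crictl inspecti registry.k8s.io/pause:latest"   # exits 1: image gone
	minikube -p functional-698418 cache reload                                              # re-push cached images into the node
	minikube -p functional-698418 ssh "sudo crictl inspecti registry.k8s.io/pause:latest"   # succeeds again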

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 kubectl -- --context functional-698418 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-698418 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-698418 logs: (1.270418704s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.27s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs1894844973/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-698418 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs1894844973/001/logs.txt: (1.258852885s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.26s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (3.52s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-698418 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-698418
E1217 00:28:44.660832   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-698418: exit status 115 (233.497962ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.109:31156 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-698418 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (3.52s)
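Exit status 115 is minikube's SVC_UNREACHABLE code: the Service object exists and is assigned a NodePort URL (the table in stdout), but no running pod backs it, so the service command refuses to open it. One way to confirm the empty backend before invoking minikube service (service name and manifest are the ones this test uses):

	kubectl --context functional-698418 apply -f testdata/invalidsvc.yaml
	kubectl --context functional-698418 get endpoints invalid-svc     # no addresses: nothing behind the service
	minikube service invalid-svc -p functional-698418                 # exits 115 (SVC_UNREACHABLE)
	kubectl --context functional-698418 delete -f testdata/invalidsvc.yaml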

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.44s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-698418 config get cpus: exit status 14 (68.025783ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-698418 config get cpus: exit status 14 (71.886056ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.44s)
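Exit status 14 is the config code for a missing key: config get fails while the key is unset and succeeds after config set. The cycle the test runs, by hand:

	minikube -p functional-698418 config set cpus 2
	minikube -p functional-698418 config get cpus      # prints 2
	minikube -p functional-698418 config unset cpus
	minikube -p functional-698418 config get cpus      # exit 14: key not found in config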

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-698418 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-698418 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (107.056292ms)

                                                
                                                
-- stdout --
	* [functional-698418] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22168-12839/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12839/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 00:32:56.097169   28906 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:32:56.097450   28906 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:32:56.097466   28906 out.go:374] Setting ErrFile to fd 2...
	I1217 00:32:56.097473   28906 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:32:56.097684   28906 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12839/.minikube/bin
	I1217 00:32:56.098154   28906 out.go:368] Setting JSON to false
	I1217 00:32:56.098982   28906 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4522,"bootTime":1765927054,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 00:32:56.099045   28906 start.go:143] virtualization: kvm guest
	I1217 00:32:56.101378   28906 out.go:179] * [functional-698418] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 00:32:56.102681   28906 notify.go:221] Checking for updates...
	I1217 00:32:56.102706   28906 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:32:56.104101   28906 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:32:56.105719   28906 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-12839/kubeconfig
	I1217 00:32:56.107116   28906 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12839/.minikube
	I1217 00:32:56.108391   28906 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 00:32:56.109803   28906 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:32:56.111411   28906 config.go:182] Loaded profile config "functional-698418": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:32:56.111906   28906 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:32:56.142486   28906 out.go:179] * Using the kvm2 driver based on existing profile
	I1217 00:32:56.143723   28906 start.go:309] selected driver: kvm2
	I1217 00:32:56.143738   28906 start.go:927] validating driver "kvm2" against &{Name:functional-698418 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-698418 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:32:56.143849   28906 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:32:56.145860   28906 out.go:203] 
	W1217 00:32:56.147103   28906 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1217 00:32:56.148223   28906 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-698418 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.22s)
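DryRun validates the requested flags against the existing profile without mutating it: asking for 250MB trips the RSRC_INSUFFICIENT_REQ_MEMORY guard (usable minimum 1800MB) and exits 23, while the second dry run with no memory override validates cleanly. Reproduced by hand with the same flags as the test:

	minikube start -p functional-698418 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
	echo $?    # 23: requested memory is below the usable minimum
	minikube start -p functional-698418 --dry-run --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
	echo $?    # 0: the existing profile validates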

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-698418 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-698418 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (112.191233ms)

                                                
                                                
-- stdout --
	* [functional-698418] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22168-12839/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12839/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 00:34:58.017344   29429 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:34:58.017575   29429 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:34:58.017583   29429 out.go:374] Setting ErrFile to fd 2...
	I1217 00:34:58.017587   29429 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:34:58.017835   29429 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12839/.minikube/bin
	I1217 00:34:58.018256   29429 out.go:368] Setting JSON to false
	I1217 00:34:58.019096   29429 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4644,"bootTime":1765927054,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 00:34:58.019154   29429 start.go:143] virtualization: kvm guest
	I1217 00:34:58.021244   29429 out.go:179] * [functional-698418] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1217 00:34:58.022849   29429 notify.go:221] Checking for updates...
	I1217 00:34:58.022887   29429 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:34:58.024449   29429 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:34:58.025969   29429 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-12839/kubeconfig
	I1217 00:34:58.027336   29429 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12839/.minikube
	I1217 00:34:58.028756   29429 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 00:34:58.030134   29429 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:34:58.031812   29429 config.go:182] Loaded profile config "functional-698418": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1217 00:34:58.032278   29429 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:34:58.062597   29429 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1217 00:34:58.063991   29429 start.go:309] selected driver: kvm2
	I1217 00:34:58.064002   29429 start.go:927] validating driver "kvm2" against &{Name:functional-698418 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-698418 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.109 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:34:58.064121   29429 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:34:58.066098   29429 out.go:203] 
	W1217 00:34:58.067330   29429 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1217 00:34:58.068510   29429 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.11s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.78s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.78s)
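The -f flag to status takes a Go template rendered against minikube's status struct, so the label text is arbitrary (the test's "kublet" label is passed through as-is) and only the {{.Field}} names matter; -o json emits the same fields machine-readably. For example:

	minikube -p functional-698418 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
	minikube -p functional-698418 status -o json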

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.33s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.33s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 ssh -n functional-698418 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 cp functional-698418:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp2742981434/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 ssh -n functional-698418 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 ssh -n functional-698418 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.18s)
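Note: the CpCmd sequence above is a copy-then-read-back round trip. A minimal Go sketch of the same flow, assuming the profile name, binary path and file paths from this run (the comparison step is added here for illustration and is not the test's own helper):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		const profile = "functional-698418" // profile name taken from the run above
		src, dst := "testdata/cp-test.txt", "/home/docker/cp-test.txt"

		// Copy the local file into the node, as `minikube cp` does above.
		if out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "cp", src, dst).CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "cp failed: %v\n%s", err, out)
			os.Exit(1)
		}

		// Read the file back over ssh and compare it with the local copy.
		got, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh", "-n", profile, "sudo cat "+dst).Output()
		if err != nil {
			fmt.Fprintf(os.Stderr, "ssh cat failed: %v\n", err)
			os.Exit(1)
		}
		want, _ := os.ReadFile(src)
		fmt.Println("contents match:", string(got) == string(want))
	}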

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/17074/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 ssh "sudo cat /etc/test/nested/copy/17074/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.16s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/17074.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 ssh "sudo cat /etc/ssl/certs/17074.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/17074.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 ssh "sudo cat /usr/share/ca-certificates/17074.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/170742.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 ssh "sudo cat /etc/ssl/certs/170742.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/170742.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 ssh "sudo cat /usr/share/ca-certificates/170742.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.16s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-698418 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.39s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-698418 ssh "sudo systemctl is-active docker": exit status 1 (196.085185ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-698418 ssh "sudo systemctl is-active containerd": exit status 1 (190.359976ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.39s)
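Note: the non-zero exits above are expected. `systemctl is-active` exits with status 3 when a unit is inactive, so the test only fails if the printed state is something other than "inactive". A minimal sketch of that check, run locally for simplicity rather than over `minikube ssh` as the test does (the helper name is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// isInactive reports whether `systemctl is-active <unit>` prints "inactive".
	// systemctl exits non-zero (typically status 3) for inactive units, so the
	// error alone is not treated as a failure; the printed state decides.
	func isInactive(unit string) bool {
		out, _ := exec.Command("systemctl", "is-active", unit).CombinedOutput()
		return strings.TrimSpace(string(out)) == "inactive"
	}

	func main() {
		for _, unit := range []string{"docker", "containerd"} {
			fmt.Printf("%s inactive: %v\n", unit, isInactive(unit))
		}
	}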

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.28s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.28s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.45s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.45s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-698418 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
localhost/minikube-local-cache-test:functional-698418
localhost/kicbase/echo-server:functional-698418
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-698418 image ls --format short --alsologtostderr:
I1217 00:38:48.667301   30420 out.go:360] Setting OutFile to fd 1 ...
I1217 00:38:48.667392   30420 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:38:48.667396   30420 out.go:374] Setting ErrFile to fd 2...
I1217 00:38:48.667400   30420 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:38:48.667602   30420 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12839/.minikube/bin
I1217 00:38:48.668147   30420 config.go:182] Loaded profile config "functional-698418": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1217 00:38:48.668240   30420 config.go:182] Loaded profile config "functional-698418": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1217 00:38:48.670290   30420 ssh_runner.go:195] Run: systemctl --version
I1217 00:38:48.672262   30420 main.go:143] libmachine: domain functional-698418 has defined MAC address 52:54:00:29:90:fc in network mk-functional-698418
I1217 00:38:48.672620   30420 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:29:90:fc", ip: ""} in network mk-functional-698418: {Iface:virbr1 ExpiryTime:2025-12-17 01:18:18 +0000 UTC Type:0 Mac:52:54:00:29:90:fc Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:functional-698418 Clientid:01:52:54:00:29:90:fc}
I1217 00:38:48.672643   30420 main.go:143] libmachine: domain functional-698418 has defined IP address 192.168.39.109 and MAC address 52:54:00:29:90:fc in network mk-functional-698418
I1217 00:38:48.672798   30420 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/functional-698418/id_rsa Username:docker}
I1217 00:38:48.753462   30420 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.18s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-698418 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-scheduler          │ v1.35.0-beta.0     │ 7bb6219ddab95 │ 52.7MB │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-beta.0     │ 45f3cc72d235f │ 76.9MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/kicbase/echo-server           │ functional-698418  │ 9056ab77afb8e │ 4.94MB │
│ localhost/minikube-local-cache-test     │ functional-698418  │ fdeb3cf48ad96 │ 3.33kB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-beta.0     │ aa9d02839d8de │ 90.8MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/kube-proxy              │ v1.35.0-beta.0     │ 8a4ded35a3eb1 │ 72MB   │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ localhost/my-image                      │ functional-698418  │ 5ac45948df084 │ 1.47MB │
│ registry.k8s.io/coredns/coredns         │ v1.13.1            │ aa5e3ebc0dfed │ 79.2MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-698418 image ls --format table --alsologtostderr:
I1217 00:38:52.334824   30601 out.go:360] Setting OutFile to fd 1 ...
I1217 00:38:52.335070   30601 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:38:52.335079   30601 out.go:374] Setting ErrFile to fd 2...
I1217 00:38:52.335084   30601 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:38:52.335274   30601 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12839/.minikube/bin
I1217 00:38:52.335784   30601 config.go:182] Loaded profile config "functional-698418": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1217 00:38:52.335881   30601 config.go:182] Loaded profile config "functional-698418": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1217 00:38:52.338055   30601 ssh_runner.go:195] Run: systemctl --version
I1217 00:38:52.340665   30601 main.go:143] libmachine: domain functional-698418 has defined MAC address 52:54:00:29:90:fc in network mk-functional-698418
I1217 00:38:52.341105   30601 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:29:90:fc", ip: ""} in network mk-functional-698418: {Iface:virbr1 ExpiryTime:2025-12-17 01:18:18 +0000 UTC Type:0 Mac:52:54:00:29:90:fc Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:functional-698418 Clientid:01:52:54:00:29:90:fc}
I1217 00:38:52.341136   30601 main.go:143] libmachine: domain functional-698418 has defined IP address 192.168.39.109 and MAC address 52:54:00:29:90:fc in network mk-functional-698418
I1217 00:38:52.341300   30601 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/functional-698418/id_rsa Username:docker}
I1217 00:38:52.423658   30601 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.20s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-698418 image ls --format json --alsologtostderr:
[{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7","registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"79193994"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["regi
stry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"fdeb3cf48ad9679b364e8deba7195e98ea961267e4f7508433f750d27c318e4e","repoDigests":["localhost/minikube-local-cache-test@sha256:d3f1d938016158687e2c0260ee06ac6af7ec1816fdb1b317a8cae035c182486c"],"repoTags":["localhost/minikube-local-cache-test:functional-698418"],"size":"3328"},{"id":"aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58","registry.k8s.io/kube-apiserver@sha256:c95487a138f982d925eb8c59c7fc40761c58af445463ac4df872aee36c5e999c"],"repoTags":["registry.k8s.io/kube-api
server:v1.35.0-beta.0"],"size":"90819569"},{"id":"45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d","registry.k8s.io/kube-controller-manager@sha256:ca8b699e445178c1fc4a8f31245d6bd7bd97192cc7b43baa2360522e09b55581"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"76872535"},{"id":"7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46","repoDigests":["registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6","registry.k8s.io/kube-scheduler@sha256:bb3d10b07de89c1e36a78794573fdbb7939a465d235a5bd164bae43aec22ee5b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"52747095"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":
["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-698418"],"size":"4943877"},{"id":"8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec
23810","repoDigests":["registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a","registry.k8s.io/kube-proxy@sha256:70a55889ba3d6b048529c8edae375ce2f20d1204f3bbcacd24e617abe8888b82"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"71977881"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"ba1b44a47af7d8d5a62b9806c7ff71f323852f5867d838d60701bafa1b9f3f8b","repoDigests":["docker.io/librar
y/488fb1eea67cfb995ae1c6b36882a607fd78a72bca7b1f22635a440a758e5d72-tmp@sha256:ee21ba8e6087ed77b8a6efc53639a8cb508f1566be8d505c2882525312eb099b"],"repoTags":[],"size":"1466018"},{"id":"5ac45948df084264d33cc3a4ee28f018bb0b0e1d76a6665922faff7873a57df5","repoDigests":["localhost/my-image@sha256:a08dcc951aeedc1bf91311063d922169690dbe336745aae7df0b75d521c29fbd"],"repoTags":["localhost/my-image:functional-698418"],"size":"1468599"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-698418 image ls --format json --alsologtostderr:
I1217 00:38:52.256014   30591 out.go:360] Setting OutFile to fd 1 ...
I1217 00:38:52.256145   30591 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:38:52.256152   30591 out.go:374] Setting ErrFile to fd 2...
I1217 00:38:52.256158   30591 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:38:52.256376   30591 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12839/.minikube/bin
I1217 00:38:52.256915   30591 config.go:182] Loaded profile config "functional-698418": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1217 00:38:52.257067   30591 config.go:182] Loaded profile config "functional-698418": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1217 00:38:52.259695   30591 ssh_runner.go:195] Run: systemctl --version
I1217 00:38:52.262862   30591 main.go:143] libmachine: domain functional-698418 has defined MAC address 52:54:00:29:90:fc in network mk-functional-698418
I1217 00:38:52.263495   30591 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:29:90:fc", ip: ""} in network mk-functional-698418: {Iface:virbr1 ExpiryTime:2025-12-17 01:18:18 +0000 UTC Type:0 Mac:52:54:00:29:90:fc Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:functional-698418 Clientid:01:52:54:00:29:90:fc}
I1217 00:38:52.263543   30591 main.go:143] libmachine: domain functional-698418 has defined IP address 192.168.39.109 and MAC address 52:54:00:29:90:fc in network mk-functional-698418
I1217 00:38:52.263785   30591 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/functional-698418/id_rsa Username:docker}
I1217 00:38:52.354386   30591 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.20s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-698418 image ls --format yaml --alsologtostderr:
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-698418
size: "4943877"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79193994"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
- registry.k8s.io/kube-apiserver@sha256:c95487a138f982d925eb8c59c7fc40761c58af445463ac4df872aee36c5e999c
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "90819569"
- id: 8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
- registry.k8s.io/kube-proxy@sha256:70a55889ba3d6b048529c8edae375ce2f20d1204f3bbcacd24e617abe8888b82
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "71977881"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: fdeb3cf48ad9679b364e8deba7195e98ea961267e4f7508433f750d27c318e4e
repoDigests:
- localhost/minikube-local-cache-test@sha256:d3f1d938016158687e2c0260ee06ac6af7ec1816fdb1b317a8cae035c182486c
repoTags:
- localhost/minikube-local-cache-test:functional-698418
size: "3328"
- id: 45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
- registry.k8s.io/kube-controller-manager@sha256:ca8b699e445178c1fc4a8f31245d6bd7bd97192cc7b43baa2360522e09b55581
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "76872535"
- id: 7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
- registry.k8s.io/kube-scheduler@sha256:bb3d10b07de89c1e36a78794573fdbb7939a465d235a5bd164bae43aec22ee5b
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "52747095"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-698418 image ls --format yaml --alsologtostderr:
I1217 00:38:48.848602   30431 out.go:360] Setting OutFile to fd 1 ...
I1217 00:38:48.848860   30431 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:38:48.848868   30431 out.go:374] Setting ErrFile to fd 2...
I1217 00:38:48.848872   30431 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:38:48.849067   30431 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12839/.minikube/bin
I1217 00:38:48.849616   30431 config.go:182] Loaded profile config "functional-698418": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1217 00:38:48.849709   30431 config.go:182] Loaded profile config "functional-698418": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1217 00:38:48.851817   30431 ssh_runner.go:195] Run: systemctl --version
I1217 00:38:48.854078   30431 main.go:143] libmachine: domain functional-698418 has defined MAC address 52:54:00:29:90:fc in network mk-functional-698418
I1217 00:38:48.854582   30431 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:29:90:fc", ip: ""} in network mk-functional-698418: {Iface:virbr1 ExpiryTime:2025-12-17 01:18:18 +0000 UTC Type:0 Mac:52:54:00:29:90:fc Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:functional-698418 Clientid:01:52:54:00:29:90:fc}
I1217 00:38:48.854615   30431 main.go:143] libmachine: domain functional-698418 has defined IP address 192.168.39.109 and MAC address 52:54:00:29:90:fc in network mk-functional-698418
I1217 00:38:48.854793   30431 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/functional-698418/id_rsa Username:docker}
I1217 00:38:48.936251   30431 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.18s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-698418 ssh pgrep buildkitd: exit status 1 (156.042839ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 image build -t localhost/my-image:functional-698418 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-698418 image build -t localhost/my-image:functional-698418 testdata/build --alsologtostderr: (2.8574963s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-698418 image build -t localhost/my-image:functional-698418 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> ba1b44a47af
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-698418
--> 5ac45948df0
Successfully tagged localhost/my-image:functional-698418
5ac45948df084264d33cc3a4ee28f018bb0b0e1d76a6665922faff7873a57df5
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-698418 image build -t localhost/my-image:functional-698418 testdata/build --alsologtostderr:
I1217 00:38:49.190346   30464 out.go:360] Setting OutFile to fd 1 ...
I1217 00:38:49.190492   30464 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:38:49.190526   30464 out.go:374] Setting ErrFile to fd 2...
I1217 00:38:49.190539   30464 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:38:49.190833   30464 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12839/.minikube/bin
I1217 00:38:49.191674   30464 config.go:182] Loaded profile config "functional-698418": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1217 00:38:49.192431   30464 config.go:182] Loaded profile config "functional-698418": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1217 00:38:49.194578   30464 ssh_runner.go:195] Run: systemctl --version
I1217 00:38:49.196650   30464 main.go:143] libmachine: domain functional-698418 has defined MAC address 52:54:00:29:90:fc in network mk-functional-698418
I1217 00:38:49.197032   30464 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:29:90:fc", ip: ""} in network mk-functional-698418: {Iface:virbr1 ExpiryTime:2025-12-17 01:18:18 +0000 UTC Type:0 Mac:52:54:00:29:90:fc Iaid: IPaddr:192.168.39.109 Prefix:24 Hostname:functional-698418 Clientid:01:52:54:00:29:90:fc}
I1217 00:38:49.197066   30464 main.go:143] libmachine: domain functional-698418 has defined IP address 192.168.39.109 and MAC address 52:54:00:29:90:fc in network mk-functional-698418
I1217 00:38:49.197245   30464 sshutil.go:53] new ssh client: &{IP:192.168.39.109 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/functional-698418/id_rsa Username:docker}
I1217 00:38:49.280253   30464 build_images.go:162] Building image from path: /tmp/build.1693322192.tar
I1217 00:38:49.280317   30464 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1217 00:38:49.296385   30464 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1693322192.tar
I1217 00:38:49.301963   30464 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1693322192.tar: stat -c "%s %y" /var/lib/minikube/build/build.1693322192.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1693322192.tar': No such file or directory
I1217 00:38:49.301996   30464 ssh_runner.go:362] scp /tmp/build.1693322192.tar --> /var/lib/minikube/build/build.1693322192.tar (3072 bytes)
I1217 00:38:49.344803   30464 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1693322192
I1217 00:38:49.361741   30464 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1693322192 -xf /var/lib/minikube/build/build.1693322192.tar
I1217 00:38:49.378381   30464 crio.go:315] Building image: /var/lib/minikube/build/build.1693322192
I1217 00:38:49.378456   30464 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-698418 /var/lib/minikube/build/build.1693322192 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1217 00:38:51.949780   30464 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-698418 /var/lib/minikube/build/build.1693322192 --cgroup-manager=cgroupfs: (2.571292281s)
I1217 00:38:51.949879   30464 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1693322192
I1217 00:38:51.968074   30464 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1693322192.tar
I1217 00:38:51.981550   30464 build_images.go:218] Built localhost/my-image:functional-698418 from /tmp/build.1693322192.tar
I1217 00:38:51.981588   30464 build_images.go:134] succeeded building to: functional-698418
I1217 00:38:51.981593   30464 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.22s)
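Note: the build flow recorded above packs testdata/build into a tar, copies it into the VM, runs podman build with the cgroupfs cgroup manager, and then checks the new tag with `image ls`. A minimal sketch driving the same steps through the CLI, assuming the profile name and binary path from this run (the verification logic is illustrative, not minikube's code):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		const profile = "functional-698418" // taken from the run above
		const tag = "localhost/my-image:" + profile

		// Build an image inside the cluster from a local context directory.
		if out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"image", "build", "-t", tag, "testdata/build").CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "image build failed: %v\n%s", err, out)
			os.Exit(1)
		}

		// Confirm the freshly built tag is now listed by the runtime.
		out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "image", "ls").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("image listed:", strings.Contains(string(out), tag))
	}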

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-698418
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.42s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 image load --daemon kicbase/echo-server:functional-698418 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-698418 image load --daemon kicbase/echo-server:functional-698418 --alsologtostderr: (1.212442376s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.42s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.35s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.35s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.88s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 image load --daemon kicbase/echo-server:functional-698418 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.88s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.32s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "250.651094ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "68.950553ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.32s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.31s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "245.544033ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "65.470289ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.31s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (0.97s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-698418
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 image load --daemon kicbase/echo-server:functional-698418 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (0.97s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.5s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 image save kicbase/echo-server:functional-698418 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.50s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.46s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 image rm kicbase/echo-server:functional-698418 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.46s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.76s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.76s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.54s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-698418
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 image save --daemon kicbase/echo-server:functional-698418 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-698418
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.54s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.09s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-698418 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2996149672/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-698418 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (153.83285ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1217 00:32:53.839202   17074 retry.go:31] will retry after 254.552246ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-698418 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2996149672/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-698418 ssh "sudo umount -f /mount-9p": exit status 1 (153.911388ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-698418 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-698418 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2996149672/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.09s)
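Note: the first `findmnt` above fails because the 9p mount is still being established, and the test retries after a short backoff (see retry.go in the log). A minimal polling sketch of that wait, assuming the profile name, binary path and mount point from this run (waitForMount and the fixed backoff are illustrative, not the test's code):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
		"time"
	)

	// waitForMount polls `findmnt -T <dir>` inside the VM until a 9p filesystem
	// shows up or the deadline expires. The fixed 250ms backoff stands in for
	// the jittered retry used above.
	func waitForMount(profile, dir string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
				"ssh", fmt.Sprintf("findmnt -T %s | grep 9p", dir)).CombinedOutput()
			if err == nil && strings.Contains(string(out), "9p") {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("mount %s not ready: %v\n%s", dir, err, out)
			}
			time.Sleep(250 * time.Millisecond)
		}
	}

	func main() {
		if err := waitForMount("functional-698418", "/mount-9p", 10*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("9p mount is ready")
	}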

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-698418 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2837782444/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-698418 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2837782444/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-698418 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2837782444/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-698418 ssh "findmnt -T" /mount1: exit status 1 (165.347678ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1217 00:32:54.938429   17074 retry.go:31] will retry after 461.825052ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-698418 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-698418 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2837782444/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-698418 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2837782444/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-698418 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2837782444/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.13s)
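The cleanup test can likewise be exercised manually; a sketch under the same assumptions (placeholder source directory, stock binary):

    # start three background mounts of the same host directory
    minikube mount -p functional-698418 /tmp/mount-src:/mount1 &
    minikube mount -p functional-698418 /tmp/mount-src:/mount2 &
    minikube mount -p functional-698418 /tmp/mount-src:/mount3 &
    # verify each target inside the guest
    for m in /mount1 /mount2 /mount3; do minikube -p functional-698418 ssh "findmnt -T" "$m"; done
    # kill every mount process belonging to the profile in one command
    minikube mount -p functional-698418 --kill=true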

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (1.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-698418 service list: (1.221360431s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (1.22s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (1.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-698418 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-698418 service list -o json: (1.21965673s)
functional_test.go:1504: Took "1.219744584s" to run "out/minikube-linux-amd64 -p functional-698418 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (1.22s)
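Both service listings can be scripted; a sketch, assuming jq is installed (the JSON field layout is not shown in this report):

    # human-readable table of services across namespaces
    minikube -p functional-698418 service list
    # same data as JSON, pretty-printed for inspection or further scripting
    minikube -p functional-698418 service list -o json | jq .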

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-698418
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-698418
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-698418
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (187.33s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1217 00:43:44.661159   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:43:45.472095   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-698418/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:43:45.478569   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-698418/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:43:45.490093   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-698418/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:43:45.511557   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-698418/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:43:45.553052   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-698418/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:43:45.634580   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-698418/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:43:45.796243   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-698418/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:43:46.118063   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-698418/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:43:46.760168   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-698418/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:43:48.041833   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-698418/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:43:50.604266   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-698418/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:43:55.725650   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-698418/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:44:05.967193   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-698418/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:44:26.449159   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-698418/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:45:07.411271   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-698418/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-172576 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (3m6.75242875s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (187.33s)
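The cluster that the remaining serial tests run against is created with the flags shown above; a condensed sketch using the stock binary:

    # start a multi-control-plane (HA) cluster on KVM with the CRI-O runtime
    minikube start -p ha-172576 --ha --memory 3072 --wait true --driver=kvm2 --container-runtime=crio
    # the three control-plane nodes should report host/kubelet/apiserver Running and kubeconfig Configured
    minikube -p ha-172576 status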

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (6.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-172576 kubectl -- rollout status deployment/busybox: (3.874235479s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 kubectl -- exec busybox-7b57f96db7-cmzfq -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 kubectl -- exec busybox-7b57f96db7-prhrt -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 kubectl -- exec busybox-7b57f96db7-tsm4h -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 kubectl -- exec busybox-7b57f96db7-cmzfq -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 kubectl -- exec busybox-7b57f96db7-prhrt -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 kubectl -- exec busybox-7b57f96db7-tsm4h -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 kubectl -- exec busybox-7b57f96db7-cmzfq -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 kubectl -- exec busybox-7b57f96db7-prhrt -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 kubectl -- exec busybox-7b57f96db7-tsm4h -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.25s)
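The DNS checks above can be repeated against any of the busybox replicas; a sketch (the pod name is one example taken from this run):

    # deploy the busybox test workload from the repository's testdata and wait for the rollout
    minikube -p ha-172576 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
    minikube -p ha-172576 kubectl -- rollout status deployment/busybox
    # resolve an external name, the short service name, and the fully-qualified name from inside a pod
    minikube -p ha-172576 kubectl -- exec busybox-7b57f96db7-cmzfq -- nslookup kubernetes.io
    minikube -p ha-172576 kubectl -- exec busybox-7b57f96db7-cmzfq -- nslookup kubernetes.default.svc.cluster.local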

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 kubectl -- exec busybox-7b57f96db7-cmzfq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 kubectl -- exec busybox-7b57f96db7-cmzfq -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 kubectl -- exec busybox-7b57f96db7-prhrt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 kubectl -- exec busybox-7b57f96db7-prhrt -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 kubectl -- exec busybox-7b57f96db7-tsm4h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 kubectl -- exec busybox-7b57f96db7-tsm4h -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.37s)
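The host-reachability check extracts the address that host.minikube.internal resolves to and pings it once; a sketch using one of the pods from this run:

    # pull the resolved address out of the nslookup output (fifth line, third field)
    HOST_IP=$(minikube -p ha-172576 kubectl -- exec busybox-7b57f96db7-cmzfq -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    # a single ICMP probe back to the host network (192.168.39.1 in this run) should succeed
    minikube -p ha-172576 kubectl -- exec busybox-7b57f96db7-cmzfq -- sh -c "ping -c 1 $HOST_IP"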

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (45.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 node add --alsologtostderr -v 5
E1217 00:46:29.333210   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-698418/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:46:47.730368   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-172576 node add --alsologtostderr -v 5: (44.950885509s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (45.66s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-172576 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.71s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (11.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 status --output json --alsologtostderr -v 5
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 cp testdata/cp-test.txt ha-172576:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 ssh -n ha-172576 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 cp ha-172576:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2579517718/001/cp-test_ha-172576.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 ssh -n ha-172576 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 cp ha-172576:/home/docker/cp-test.txt ha-172576-m02:/home/docker/cp-test_ha-172576_ha-172576-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 ssh -n ha-172576 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 ssh -n ha-172576-m02 "sudo cat /home/docker/cp-test_ha-172576_ha-172576-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 cp ha-172576:/home/docker/cp-test.txt ha-172576-m03:/home/docker/cp-test_ha-172576_ha-172576-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 ssh -n ha-172576 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 ssh -n ha-172576-m03 "sudo cat /home/docker/cp-test_ha-172576_ha-172576-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 cp ha-172576:/home/docker/cp-test.txt ha-172576-m04:/home/docker/cp-test_ha-172576_ha-172576-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 ssh -n ha-172576 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 ssh -n ha-172576-m04 "sudo cat /home/docker/cp-test_ha-172576_ha-172576-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 cp testdata/cp-test.txt ha-172576-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 ssh -n ha-172576-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 cp ha-172576-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2579517718/001/cp-test_ha-172576-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 ssh -n ha-172576-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 cp ha-172576-m02:/home/docker/cp-test.txt ha-172576:/home/docker/cp-test_ha-172576-m02_ha-172576.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 ssh -n ha-172576-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 ssh -n ha-172576 "sudo cat /home/docker/cp-test_ha-172576-m02_ha-172576.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 cp ha-172576-m02:/home/docker/cp-test.txt ha-172576-m03:/home/docker/cp-test_ha-172576-m02_ha-172576-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 ssh -n ha-172576-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 ssh -n ha-172576-m03 "sudo cat /home/docker/cp-test_ha-172576-m02_ha-172576-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 cp ha-172576-m02:/home/docker/cp-test.txt ha-172576-m04:/home/docker/cp-test_ha-172576-m02_ha-172576-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 ssh -n ha-172576-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 ssh -n ha-172576-m04 "sudo cat /home/docker/cp-test_ha-172576-m02_ha-172576-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 cp testdata/cp-test.txt ha-172576-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 ssh -n ha-172576-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 cp ha-172576-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2579517718/001/cp-test_ha-172576-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 ssh -n ha-172576-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 cp ha-172576-m03:/home/docker/cp-test.txt ha-172576:/home/docker/cp-test_ha-172576-m03_ha-172576.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 ssh -n ha-172576-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 ssh -n ha-172576 "sudo cat /home/docker/cp-test_ha-172576-m03_ha-172576.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 cp ha-172576-m03:/home/docker/cp-test.txt ha-172576-m02:/home/docker/cp-test_ha-172576-m03_ha-172576-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 ssh -n ha-172576-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 ssh -n ha-172576-m02 "sudo cat /home/docker/cp-test_ha-172576-m03_ha-172576-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 cp ha-172576-m03:/home/docker/cp-test.txt ha-172576-m04:/home/docker/cp-test_ha-172576-m03_ha-172576-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 ssh -n ha-172576-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 ssh -n ha-172576-m04 "sudo cat /home/docker/cp-test_ha-172576-m03_ha-172576-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 cp testdata/cp-test.txt ha-172576-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 ssh -n ha-172576-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 cp ha-172576-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2579517718/001/cp-test_ha-172576-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 ssh -n ha-172576-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 cp ha-172576-m04:/home/docker/cp-test.txt ha-172576:/home/docker/cp-test_ha-172576-m04_ha-172576.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 ssh -n ha-172576-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 ssh -n ha-172576 "sudo cat /home/docker/cp-test_ha-172576-m04_ha-172576.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 cp ha-172576-m04:/home/docker/cp-test.txt ha-172576-m02:/home/docker/cp-test_ha-172576-m04_ha-172576-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 ssh -n ha-172576-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 ssh -n ha-172576-m02 "sudo cat /home/docker/cp-test_ha-172576-m04_ha-172576-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 cp ha-172576-m04:/home/docker/cp-test.txt ha-172576-m03:/home/docker/cp-test_ha-172576-m04_ha-172576-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 ssh -n ha-172576-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 ssh -n ha-172576-m03 "sudo cat /home/docker/cp-test_ha-172576-m04_ha-172576-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (11.16s)
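The copy matrix above boils down to three directions of minikube cp plus an ssh read-back; a sketch (the /tmp destination path is a placeholder):

    # host -> node
    minikube -p ha-172576 cp testdata/cp-test.txt ha-172576:/home/docker/cp-test.txt
    # node -> host
    minikube -p ha-172576 cp ha-172576:/home/docker/cp-test.txt /tmp/cp-test_ha-172576.txt
    # node -> node
    minikube -p ha-172576 cp ha-172576:/home/docker/cp-test.txt ha-172576-m02:/home/docker/cp-test_ha-172576_ha-172576-m02.txt
    # verify the copy on the destination node
    minikube -p ha-172576 ssh -n ha-172576-m02 "sudo cat /home/docker/cp-test_ha-172576_ha-172576-m02.txt"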

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (80.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 node stop m02 --alsologtostderr -v 5
E1217 00:47:12.864072   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-069802/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-172576 node stop m02 --alsologtostderr -v 5: (1m20.214822202s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-172576 status --alsologtostderr -v 5: exit status 7 (534.234638ms)

                                                
                                                
-- stdout --
	ha-172576
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-172576-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-172576-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-172576-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 00:48:32.636394   34494 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:48:32.636496   34494 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:48:32.636504   34494 out.go:374] Setting ErrFile to fd 2...
	I1217 00:48:32.636508   34494 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:48:32.636685   34494 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12839/.minikube/bin
	I1217 00:48:32.636835   34494 out.go:368] Setting JSON to false
	I1217 00:48:32.636856   34494 mustload.go:66] Loading cluster: ha-172576
	I1217 00:48:32.636982   34494 notify.go:221] Checking for updates...
	I1217 00:48:32.637255   34494 config.go:182] Loaded profile config "ha-172576": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:48:32.637274   34494 status.go:174] checking status of ha-172576 ...
	I1217 00:48:32.640064   34494 status.go:371] ha-172576 host status = "Running" (err=<nil>)
	I1217 00:48:32.640083   34494 host.go:66] Checking if "ha-172576" exists ...
	I1217 00:48:32.642769   34494 main.go:143] libmachine: domain ha-172576 has defined MAC address 52:54:00:65:78:22 in network mk-ha-172576
	I1217 00:48:32.643241   34494 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:65:78:22", ip: ""} in network mk-ha-172576: {Iface:virbr1 ExpiryTime:2025-12-17 01:43:15 +0000 UTC Type:0 Mac:52:54:00:65:78:22 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-172576 Clientid:01:52:54:00:65:78:22}
	I1217 00:48:32.643274   34494 main.go:143] libmachine: domain ha-172576 has defined IP address 192.168.39.11 and MAC address 52:54:00:65:78:22 in network mk-ha-172576
	I1217 00:48:32.643450   34494 host.go:66] Checking if "ha-172576" exists ...
	I1217 00:48:32.643671   34494 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:48:32.646430   34494 main.go:143] libmachine: domain ha-172576 has defined MAC address 52:54:00:65:78:22 in network mk-ha-172576
	I1217 00:48:32.646900   34494 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:65:78:22", ip: ""} in network mk-ha-172576: {Iface:virbr1 ExpiryTime:2025-12-17 01:43:15 +0000 UTC Type:0 Mac:52:54:00:65:78:22 Iaid: IPaddr:192.168.39.11 Prefix:24 Hostname:ha-172576 Clientid:01:52:54:00:65:78:22}
	I1217 00:48:32.646936   34494 main.go:143] libmachine: domain ha-172576 has defined IP address 192.168.39.11 and MAC address 52:54:00:65:78:22 in network mk-ha-172576
	I1217 00:48:32.647171   34494 sshutil.go:53] new ssh client: &{IP:192.168.39.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/ha-172576/id_rsa Username:docker}
	I1217 00:48:32.739865   34494 ssh_runner.go:195] Run: systemctl --version
	I1217 00:48:32.750047   34494 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:48:32.771822   34494 kubeconfig.go:125] found "ha-172576" server: "https://192.168.39.254:8443"
	I1217 00:48:32.771871   34494 api_server.go:166] Checking apiserver status ...
	I1217 00:48:32.771919   34494 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:48:32.794060   34494 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1425/cgroup
	W1217 00:48:32.812107   34494 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1425/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:48:32.812177   34494 ssh_runner.go:195] Run: ls
	I1217 00:48:32.818664   34494 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1217 00:48:32.824771   34494 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1217 00:48:32.824797   34494 status.go:463] ha-172576 apiserver status = Running (err=<nil>)
	I1217 00:48:32.824808   34494 status.go:176] ha-172576 status: &{Name:ha-172576 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 00:48:32.824826   34494 status.go:174] checking status of ha-172576-m02 ...
	I1217 00:48:32.826670   34494 status.go:371] ha-172576-m02 host status = "Stopped" (err=<nil>)
	I1217 00:48:32.826701   34494 status.go:384] host is not running, skipping remaining checks
	I1217 00:48:32.826708   34494 status.go:176] ha-172576-m02 status: &{Name:ha-172576-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 00:48:32.826739   34494 status.go:174] checking status of ha-172576-m03 ...
	I1217 00:48:32.828069   34494 status.go:371] ha-172576-m03 host status = "Running" (err=<nil>)
	I1217 00:48:32.828086   34494 host.go:66] Checking if "ha-172576-m03" exists ...
	I1217 00:48:32.830422   34494 main.go:143] libmachine: domain ha-172576-m03 has defined MAC address 52:54:00:37:e9:ec in network mk-ha-172576
	I1217 00:48:32.830855   34494 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:37:e9:ec", ip: ""} in network mk-ha-172576: {Iface:virbr1 ExpiryTime:2025-12-17 01:45:03 +0000 UTC Type:0 Mac:52:54:00:37:e9:ec Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-172576-m03 Clientid:01:52:54:00:37:e9:ec}
	I1217 00:48:32.830882   34494 main.go:143] libmachine: domain ha-172576-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:37:e9:ec in network mk-ha-172576
	I1217 00:48:32.831047   34494 host.go:66] Checking if "ha-172576-m03" exists ...
	I1217 00:48:32.831287   34494 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:48:32.833527   34494 main.go:143] libmachine: domain ha-172576-m03 has defined MAC address 52:54:00:37:e9:ec in network mk-ha-172576
	I1217 00:48:32.833993   34494 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:37:e9:ec", ip: ""} in network mk-ha-172576: {Iface:virbr1 ExpiryTime:2025-12-17 01:45:03 +0000 UTC Type:0 Mac:52:54:00:37:e9:ec Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:ha-172576-m03 Clientid:01:52:54:00:37:e9:ec}
	I1217 00:48:32.834050   34494 main.go:143] libmachine: domain ha-172576-m03 has defined IP address 192.168.39.194 and MAC address 52:54:00:37:e9:ec in network mk-ha-172576
	I1217 00:48:32.834223   34494 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/ha-172576-m03/id_rsa Username:docker}
	I1217 00:48:32.925172   34494 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:48:32.944995   34494 kubeconfig.go:125] found "ha-172576" server: "https://192.168.39.254:8443"
	I1217 00:48:32.945034   34494 api_server.go:166] Checking apiserver status ...
	I1217 00:48:32.945067   34494 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:48:32.967979   34494 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1837/cgroup
	W1217 00:48:32.979949   34494 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1837/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:48:32.980031   34494 ssh_runner.go:195] Run: ls
	I1217 00:48:32.985500   34494 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1217 00:48:32.990778   34494 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1217 00:48:32.990803   34494 status.go:463] ha-172576-m03 apiserver status = Running (err=<nil>)
	I1217 00:48:32.990811   34494 status.go:176] ha-172576-m03 status: &{Name:ha-172576-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 00:48:32.990825   34494 status.go:174] checking status of ha-172576-m04 ...
	I1217 00:48:32.992573   34494 status.go:371] ha-172576-m04 host status = "Running" (err=<nil>)
	I1217 00:48:32.992595   34494 host.go:66] Checking if "ha-172576-m04" exists ...
	I1217 00:48:32.995389   34494 main.go:143] libmachine: domain ha-172576-m04 has defined MAC address 52:54:00:de:8c:ff in network mk-ha-172576
	I1217 00:48:32.995814   34494 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:de:8c:ff", ip: ""} in network mk-ha-172576: {Iface:virbr1 ExpiryTime:2025-12-17 01:46:30 +0000 UTC Type:0 Mac:52:54:00:de:8c:ff Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-172576-m04 Clientid:01:52:54:00:de:8c:ff}
	I1217 00:48:32.995837   34494 main.go:143] libmachine: domain ha-172576-m04 has defined IP address 192.168.39.237 and MAC address 52:54:00:de:8c:ff in network mk-ha-172576
	I1217 00:48:32.995994   34494 host.go:66] Checking if "ha-172576-m04" exists ...
	I1217 00:48:32.996251   34494 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:48:32.999726   34494 main.go:143] libmachine: domain ha-172576-m04 has defined MAC address 52:54:00:de:8c:ff in network mk-ha-172576
	I1217 00:48:33.000379   34494 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:de:8c:ff", ip: ""} in network mk-ha-172576: {Iface:virbr1 ExpiryTime:2025-12-17 01:46:30 +0000 UTC Type:0 Mac:52:54:00:de:8c:ff Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:ha-172576-m04 Clientid:01:52:54:00:de:8c:ff}
	I1217 00:48:33.000407   34494 main.go:143] libmachine: domain ha-172576-m04 has defined IP address 192.168.39.237 and MAC address 52:54:00:de:8c:ff in network mk-ha-172576
	I1217 00:48:33.000634   34494 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/ha-172576-m04/id_rsa Username:docker}
	I1217 00:48:33.092423   34494 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:48:33.112191   34494 status.go:176] ha-172576-m04 status: &{Name:ha-172576-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (80.75s)
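Stopping a single control-plane node leaves the cluster reachable through the remaining members, and status signals the degraded state through a non-zero exit code (7 in this run); a sketch:

    # stop only the m02 control-plane node
    minikube -p ha-172576 node stop m02
    # status prints per-node state and exits non-zero while any node is stopped
    minikube -p ha-172576 status; echo "status exit code: $?"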

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.54s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (38.23s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 node start m02 --alsologtostderr -v 5
E1217 00:48:44.661228   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:48:45.471519   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-698418/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-172576 node start m02 --alsologtostderr -v 5: (37.372553785s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (38.23s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.77s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (386.02s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 stop --alsologtostderr -v 5
E1217 00:49:13.174590   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-698418/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:52:12.863945   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-069802/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-172576 stop --alsologtostderr -v 5: (4m28.691332337s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 start --wait true --alsologtostderr -v 5
E1217 00:53:44.661127   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:53:45.471406   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-698418/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:55:15.936382   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-069802/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-172576 start --wait true --alsologtostderr -v 5: (1m57.193487096s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (386.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (19.21s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-172576 node delete m03 --alsologtostderr -v 5: (18.540385376s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (19.21s)
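Node removal and the follow-up readiness check can be run directly; a sketch reusing the go-template from the test:

    # drop the m03 control-plane node from the cluster
    minikube -p ha-172576 node delete m03
    # every remaining node should report the Ready condition as True
    kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"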

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.53s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (237.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 stop --alsologtostderr -v 5
E1217 00:57:12.863607   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-069802/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:58:44.661334   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:58:45.471628   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-698418/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-172576 stop --alsologtostderr -v 5: (3m57.138874693s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-172576 status --alsologtostderr -v 5: exit status 7 (62.002303ms)

                                                
                                                
-- stdout --
	ha-172576
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-172576-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-172576-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 00:59:55.614715   37684 out.go:360] Setting OutFile to fd 1 ...
	I1217 00:59:55.614946   37684 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:59:55.614953   37684 out.go:374] Setting ErrFile to fd 2...
	I1217 00:59:55.614958   37684 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:59:55.615157   37684 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12839/.minikube/bin
	I1217 00:59:55.615336   37684 out.go:368] Setting JSON to false
	I1217 00:59:55.615357   37684 mustload.go:66] Loading cluster: ha-172576
	I1217 00:59:55.615486   37684 notify.go:221] Checking for updates...
	I1217 00:59:55.615694   37684 config.go:182] Loaded profile config "ha-172576": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 00:59:55.615708   37684 status.go:174] checking status of ha-172576 ...
	I1217 00:59:55.617697   37684 status.go:371] ha-172576 host status = "Stopped" (err=<nil>)
	I1217 00:59:55.617718   37684 status.go:384] host is not running, skipping remaining checks
	I1217 00:59:55.617725   37684 status.go:176] ha-172576 status: &{Name:ha-172576 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 00:59:55.617746   37684 status.go:174] checking status of ha-172576-m02 ...
	I1217 00:59:55.618906   37684 status.go:371] ha-172576-m02 host status = "Stopped" (err=<nil>)
	I1217 00:59:55.618918   37684 status.go:384] host is not running, skipping remaining checks
	I1217 00:59:55.618922   37684 status.go:176] ha-172576-m02 status: &{Name:ha-172576-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 00:59:55.618931   37684 status.go:174] checking status of ha-172576-m04 ...
	I1217 00:59:55.619954   37684 status.go:371] ha-172576-m04 host status = "Stopped" (err=<nil>)
	I1217 00:59:55.619964   37684 status.go:384] host is not running, skipping remaining checks
	I1217 00:59:55.619968   37684 status.go:176] ha-172576-m04 status: &{Name:ha-172576-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (237.20s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (91.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1217 01:00:08.537266   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-698418/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-172576 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (1m30.630998265s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (91.31s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.53s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (72.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 node add --control-plane --alsologtostderr -v 5
E1217 01:02:12.869974   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-069802/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-172576 node add --control-plane --alsologtostderr -v 5: (1m12.101588162s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-172576 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (72.78s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.69s)

                                                
                                    
x
+
TestJSONOutput/start/Command (83.04s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-593657 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
E1217 01:03:27.732981   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:03:44.661620   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:03:45.471793   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-698418/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-593657 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m23.038018039s)
--- PASS: TestJSONOutput/start/Command (83.04s)
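The JSON output mode drives the same lifecycle commands but emits progress as JSON events on stdout; a sketch with the flags used in this run:

    # start, with every progress step emitted as JSON
    minikube start -p json-output-593657 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2 --container-runtime=crio
    # pause, unpause, and stop accept the same output flag
    minikube pause -p json-output-593657 --output=json --user=testUser
    minikube unpause -p json-output-593657 --output=json --user=testUser
    minikube stop -p json-output-593657 --output=json --user=testUser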

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.72s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-593657 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.72s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.66s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-593657 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (8.22s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-593657 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-593657 --output=json --user=testUser: (8.223055172s)
--- PASS: TestJSONOutput/stop/Command (8.22s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.25s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-828129 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-828129 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (89.832851ms)

-- stdout --
	{"specversion":"1.0","id":"3b1b55a4-13d3-48a7-a97d-42c3caae6e9c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-828129] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f622eb24-7757-406b-bad5-01c16cdb4ed2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22168"}}
	{"specversion":"1.0","id":"931ee8ed-21a8-41e3-bb2c-819feb23834e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5d5339d9-2471-47cd-a563-d514121c6e5d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22168-12839/kubeconfig"}}
	{"specversion":"1.0","id":"8f423e21-e059-4114-88b1-7b6a5aeb1363","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12839/.minikube"}}
	{"specversion":"1.0","id":"0e704371-76ef-48d7-8674-06926e7d47e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"63bf489c-28f4-484d-af6e-cae885e0485c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d2282abb-d6d2-4a8b-bbc7-1527f28503ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-828129" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-828129
--- PASS: TestErrorJSONOutput (0.25s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (76.16s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-730334 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-730334 --driver=kvm2  --container-runtime=crio: (36.813063145s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-733092 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-733092 --driver=kvm2  --container-runtime=crio: (36.709460165s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-730334
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-733092
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:176: Cleaning up "second-733092" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p second-733092
helpers_test.go:176: Cleaning up "first-730334" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p first-730334
--- PASS: TestMinikubeProfile (76.16s)

TestMountStart/serial/StartWithMountFirst (20.93s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-518932 --memory=3072 --mount-string /tmp/TestMountStartserial1629401564/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-518932 --memory=3072 --mount-string /tmp/TestMountStartserial1629401564/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (19.932169253s)
--- PASS: TestMountStart/serial/StartWithMountFirst (20.93s)

TestMountStart/serial/VerifyMountFirst (0.31s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-518932 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-518932 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.31s)

TestMountStart/serial/StartWithMountSecond (19.42s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-534425 --memory=3072 --mount-string /tmp/TestMountStartserial1629401564/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-534425 --memory=3072 --mount-string /tmp/TestMountStartserial1629401564/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (18.418039643s)
--- PASS: TestMountStart/serial/StartWithMountSecond (19.42s)

TestMountStart/serial/VerifyMountSecond (0.31s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-534425 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-534425 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.31s)

TestMountStart/serial/DeleteFirst (0.67s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-518932 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.67s)

TestMountStart/serial/VerifyMountPostDelete (0.31s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-534425 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-534425 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.31s)

TestMountStart/serial/Stop (1.35s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-534425
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-534425: (1.349745476s)
--- PASS: TestMountStart/serial/Stop (1.35s)

TestMountStart/serial/RestartStopped (17.94s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-534425
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-534425: (16.942304655s)
--- PASS: TestMountStart/serial/RestartStopped (17.94s)

TestMountStart/serial/VerifyMountPostStop (0.31s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-534425 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-534425 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.31s)

TestMultiNode/serial/FreshStart2Nodes (95.55s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-412026 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1217 01:07:12.864153   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-069802/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-412026 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m35.220210191s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-412026 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (95.55s)

TestMultiNode/serial/DeployApp2Nodes (5.31s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-412026 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-412026 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-412026 -- rollout status deployment/busybox: (3.707002749s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-412026 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-412026 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-412026 -- exec busybox-7b57f96db7-kmpbn -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-412026 -- exec busybox-7b57f96db7-rtrth -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-412026 -- exec busybox-7b57f96db7-kmpbn -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-412026 -- exec busybox-7b57f96db7-rtrth -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-412026 -- exec busybox-7b57f96db7-kmpbn -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-412026 -- exec busybox-7b57f96db7-rtrth -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.31s)

TestMultiNode/serial/PingHostFrom2Pods (0.91s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-412026 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-412026 -- exec busybox-7b57f96db7-kmpbn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-412026 -- exec busybox-7b57f96db7-kmpbn -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-412026 -- exec busybox-7b57f96db7-rtrth -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-412026 -- exec busybox-7b57f96db7-rtrth -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.91s)

TestMultiNode/serial/AddNode (42.35s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-412026 -v=5 --alsologtostderr
E1217 01:08:44.661501   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:08:45.471774   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-698418/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-412026 -v=5 --alsologtostderr: (41.898897648s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-412026 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (42.35s)

TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-412026 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

TestMultiNode/serial/ProfileList (0.46s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.46s)

TestMultiNode/serial/CopyFile (6.02s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-412026 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-412026 cp testdata/cp-test.txt multinode-412026:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-412026 ssh -n multinode-412026 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-412026 cp multinode-412026:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2117222907/001/cp-test_multinode-412026.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-412026 ssh -n multinode-412026 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-412026 cp multinode-412026:/home/docker/cp-test.txt multinode-412026-m02:/home/docker/cp-test_multinode-412026_multinode-412026-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-412026 ssh -n multinode-412026 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-412026 ssh -n multinode-412026-m02 "sudo cat /home/docker/cp-test_multinode-412026_multinode-412026-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-412026 cp multinode-412026:/home/docker/cp-test.txt multinode-412026-m03:/home/docker/cp-test_multinode-412026_multinode-412026-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-412026 ssh -n multinode-412026 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-412026 ssh -n multinode-412026-m03 "sudo cat /home/docker/cp-test_multinode-412026_multinode-412026-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-412026 cp testdata/cp-test.txt multinode-412026-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-412026 ssh -n multinode-412026-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-412026 cp multinode-412026-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2117222907/001/cp-test_multinode-412026-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-412026 ssh -n multinode-412026-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-412026 cp multinode-412026-m02:/home/docker/cp-test.txt multinode-412026:/home/docker/cp-test_multinode-412026-m02_multinode-412026.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-412026 ssh -n multinode-412026-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-412026 ssh -n multinode-412026 "sudo cat /home/docker/cp-test_multinode-412026-m02_multinode-412026.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-412026 cp multinode-412026-m02:/home/docker/cp-test.txt multinode-412026-m03:/home/docker/cp-test_multinode-412026-m02_multinode-412026-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-412026 ssh -n multinode-412026-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-412026 ssh -n multinode-412026-m03 "sudo cat /home/docker/cp-test_multinode-412026-m02_multinode-412026-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-412026 cp testdata/cp-test.txt multinode-412026-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-412026 ssh -n multinode-412026-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-412026 cp multinode-412026-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2117222907/001/cp-test_multinode-412026-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-412026 ssh -n multinode-412026-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-412026 cp multinode-412026-m03:/home/docker/cp-test.txt multinode-412026:/home/docker/cp-test_multinode-412026-m03_multinode-412026.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-412026 ssh -n multinode-412026-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-412026 ssh -n multinode-412026 "sudo cat /home/docker/cp-test_multinode-412026-m03_multinode-412026.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-412026 cp multinode-412026-m03:/home/docker/cp-test.txt multinode-412026-m02:/home/docker/cp-test_multinode-412026-m03_multinode-412026-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-412026 ssh -n multinode-412026-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-412026 ssh -n multinode-412026-m02 "sudo cat /home/docker/cp-test_multinode-412026-m03_multinode-412026-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.02s)

TestMultiNode/serial/StopNode (2.29s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-412026 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-412026 node stop m03: (1.632881309s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-412026 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-412026 status: exit status 7 (334.122516ms)

-- stdout --
	multinode-412026
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-412026-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-412026-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-412026 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-412026 status --alsologtostderr: exit status 7 (325.022129ms)

-- stdout --
	multinode-412026
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-412026-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-412026-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1217 01:09:08.874495   43200 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:09:08.874739   43200 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:09:08.874748   43200 out.go:374] Setting ErrFile to fd 2...
	I1217 01:09:08.874752   43200 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:09:08.874967   43200 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12839/.minikube/bin
	I1217 01:09:08.875176   43200 out.go:368] Setting JSON to false
	I1217 01:09:08.875208   43200 mustload.go:66] Loading cluster: multinode-412026
	I1217 01:09:08.875305   43200 notify.go:221] Checking for updates...
	I1217 01:09:08.875635   43200 config.go:182] Loaded profile config "multinode-412026": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:09:08.875649   43200 status.go:174] checking status of multinode-412026 ...
	I1217 01:09:08.877604   43200 status.go:371] multinode-412026 host status = "Running" (err=<nil>)
	I1217 01:09:08.877644   43200 host.go:66] Checking if "multinode-412026" exists ...
	I1217 01:09:08.880560   43200 main.go:143] libmachine: domain multinode-412026 has defined MAC address 52:54:00:86:23:11 in network mk-multinode-412026
	I1217 01:09:08.881095   43200 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:86:23:11", ip: ""} in network mk-multinode-412026: {Iface:virbr1 ExpiryTime:2025-12-17 02:06:51 +0000 UTC Type:0 Mac:52:54:00:86:23:11 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-412026 Clientid:01:52:54:00:86:23:11}
	I1217 01:09:08.881158   43200 main.go:143] libmachine: domain multinode-412026 has defined IP address 192.168.39.20 and MAC address 52:54:00:86:23:11 in network mk-multinode-412026
	I1217 01:09:08.881292   43200 host.go:66] Checking if "multinode-412026" exists ...
	I1217 01:09:08.881508   43200 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 01:09:08.883809   43200 main.go:143] libmachine: domain multinode-412026 has defined MAC address 52:54:00:86:23:11 in network mk-multinode-412026
	I1217 01:09:08.884245   43200 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:86:23:11", ip: ""} in network mk-multinode-412026: {Iface:virbr1 ExpiryTime:2025-12-17 02:06:51 +0000 UTC Type:0 Mac:52:54:00:86:23:11 Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-412026 Clientid:01:52:54:00:86:23:11}
	I1217 01:09:08.884271   43200 main.go:143] libmachine: domain multinode-412026 has defined IP address 192.168.39.20 and MAC address 52:54:00:86:23:11 in network mk-multinode-412026
	I1217 01:09:08.884404   43200 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/multinode-412026/id_rsa Username:docker}
	I1217 01:09:08.964303   43200 ssh_runner.go:195] Run: systemctl --version
	I1217 01:09:08.970820   43200 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:09:08.988009   43200 kubeconfig.go:125] found "multinode-412026" server: "https://192.168.39.20:8443"
	I1217 01:09:08.988068   43200 api_server.go:166] Checking apiserver status ...
	I1217 01:09:08.988118   43200 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:09:09.007581   43200 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1351/cgroup
	W1217 01:09:09.020186   43200 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1351/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1217 01:09:09.020248   43200 ssh_runner.go:195] Run: ls
	I1217 01:09:09.025910   43200 api_server.go:253] Checking apiserver healthz at https://192.168.39.20:8443/healthz ...
	I1217 01:09:09.030879   43200 api_server.go:279] https://192.168.39.20:8443/healthz returned 200:
	ok
	I1217 01:09:09.030920   43200 status.go:463] multinode-412026 apiserver status = Running (err=<nil>)
	I1217 01:09:09.030932   43200 status.go:176] multinode-412026 status: &{Name:multinode-412026 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 01:09:09.030958   43200 status.go:174] checking status of multinode-412026-m02 ...
	I1217 01:09:09.032862   43200 status.go:371] multinode-412026-m02 host status = "Running" (err=<nil>)
	I1217 01:09:09.032881   43200 host.go:66] Checking if "multinode-412026-m02" exists ...
	I1217 01:09:09.035349   43200 main.go:143] libmachine: domain multinode-412026-m02 has defined MAC address 52:54:00:cc:f7:2b in network mk-multinode-412026
	I1217 01:09:09.035719   43200 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cc:f7:2b", ip: ""} in network mk-multinode-412026: {Iface:virbr1 ExpiryTime:2025-12-17 02:07:43 +0000 UTC Type:0 Mac:52:54:00:cc:f7:2b Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-412026-m02 Clientid:01:52:54:00:cc:f7:2b}
	I1217 01:09:09.035742   43200 main.go:143] libmachine: domain multinode-412026-m02 has defined IP address 192.168.39.36 and MAC address 52:54:00:cc:f7:2b in network mk-multinode-412026
	I1217 01:09:09.035870   43200 host.go:66] Checking if "multinode-412026-m02" exists ...
	I1217 01:09:09.036177   43200 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 01:09:09.038620   43200 main.go:143] libmachine: domain multinode-412026-m02 has defined MAC address 52:54:00:cc:f7:2b in network mk-multinode-412026
	I1217 01:09:09.039049   43200 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cc:f7:2b", ip: ""} in network mk-multinode-412026: {Iface:virbr1 ExpiryTime:2025-12-17 02:07:43 +0000 UTC Type:0 Mac:52:54:00:cc:f7:2b Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-412026-m02 Clientid:01:52:54:00:cc:f7:2b}
	I1217 01:09:09.039072   43200 main.go:143] libmachine: domain multinode-412026-m02 has defined IP address 192.168.39.36 and MAC address 52:54:00:cc:f7:2b in network mk-multinode-412026
	I1217 01:09:09.039206   43200 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22168-12839/.minikube/machines/multinode-412026-m02/id_rsa Username:docker}
	I1217 01:09:09.124381   43200 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:09:09.140555   43200 status.go:176] multinode-412026-m02 status: &{Name:multinode-412026-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1217 01:09:09.140600   43200 status.go:174] checking status of multinode-412026-m03 ...
	I1217 01:09:09.142159   43200 status.go:371] multinode-412026-m03 host status = "Stopped" (err=<nil>)
	I1217 01:09:09.142175   43200 status.go:384] host is not running, skipping remaining checks
	I1217 01:09:09.142180   43200 status.go:176] multinode-412026-m03 status: &{Name:multinode-412026-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.29s)

TestMultiNode/serial/StartAfterStop (38.71s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-412026 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-412026 node start m03 -v=5 --alsologtostderr: (38.198743564s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-412026 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.71s)

TestMultiNode/serial/RestartKeepsNodes (301.14s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-412026
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-412026
E1217 01:11:55.938276   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-069802/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:12:12.870186   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-069802/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-412026: (2m52.082769708s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-412026 --wait=true -v=5 --alsologtostderr
E1217 01:13:44.661303   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:13:45.471474   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-698418/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-412026 --wait=true -v=5 --alsologtostderr: (2m8.929217365s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-412026
--- PASS: TestMultiNode/serial/RestartKeepsNodes (301.14s)

TestMultiNode/serial/DeleteNode (2.6s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-412026 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-412026 node delete m03: (2.142805615s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-412026 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.60s)

TestMultiNode/serial/StopMultiNode (163.89s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-412026 stop
E1217 01:16:48.540957   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-698418/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:17:12.869828   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-069802/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-412026 stop: (2m43.764722468s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-412026 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-412026 status: exit status 7 (66.493635ms)

-- stdout --
	multinode-412026
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-412026-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-412026 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-412026 status --alsologtostderr: exit status 7 (60.79378ms)

-- stdout --
	multinode-412026
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-412026-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1217 01:17:35.477395   45538 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:17:35.477620   45538 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:17:35.477630   45538 out.go:374] Setting ErrFile to fd 2...
	I1217 01:17:35.477633   45538 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:17:35.477828   45538 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12839/.minikube/bin
	I1217 01:17:35.478005   45538 out.go:368] Setting JSON to false
	I1217 01:17:35.478047   45538 mustload.go:66] Loading cluster: multinode-412026
	I1217 01:17:35.478188   45538 notify.go:221] Checking for updates...
	I1217 01:17:35.478381   45538 config.go:182] Loaded profile config "multinode-412026": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:17:35.478396   45538 status.go:174] checking status of multinode-412026 ...
	I1217 01:17:35.480782   45538 status.go:371] multinode-412026 host status = "Stopped" (err=<nil>)
	I1217 01:17:35.480803   45538 status.go:384] host is not running, skipping remaining checks
	I1217 01:17:35.480810   45538 status.go:176] multinode-412026 status: &{Name:multinode-412026 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 01:17:35.480833   45538 status.go:174] checking status of multinode-412026-m02 ...
	I1217 01:17:35.482288   45538 status.go:371] multinode-412026-m02 host status = "Stopped" (err=<nil>)
	I1217 01:17:35.482306   45538 status.go:384] host is not running, skipping remaining checks
	I1217 01:17:35.482313   45538 status.go:176] multinode-412026-m02 status: &{Name:multinode-412026-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (163.89s)

TestMultiNode/serial/RestartMultiNode (81.72s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-412026 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1217 01:18:44.660910   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:18:45.472177   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-698418/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-412026 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m21.231728986s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-412026 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (81.72s)

TestMultiNode/serial/ValidateNameConflict (40.23s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-412026
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-412026-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-412026-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (92.50757ms)

-- stdout --
	* [multinode-412026-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22168-12839/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12839/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-412026-m02' is duplicated with machine name 'multinode-412026-m02' in profile 'multinode-412026'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-412026-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-412026-m03 --driver=kvm2  --container-runtime=crio: (39.031940568s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-412026
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-412026: exit status 80 (210.839336ms)

-- stdout --
	* Adding node m03 to cluster multinode-412026 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-412026-m03 already exists in multinode-412026-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-412026-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (40.23s)

TestScheduledStopUnix (110.37s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-251339 --memory=3072 --driver=kvm2  --container-runtime=crio
E1217 01:22:12.869780   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-069802/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-251339 --memory=3072 --driver=kvm2  --container-runtime=crio: (38.767418875s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-251339 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1217 01:22:39.579367   48196 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:22:39.579486   48196 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:22:39.579492   48196 out.go:374] Setting ErrFile to fd 2...
	I1217 01:22:39.579498   48196 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:22:39.579716   48196 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12839/.minikube/bin
	I1217 01:22:39.579965   48196 out.go:368] Setting JSON to false
	I1217 01:22:39.580095   48196 mustload.go:66] Loading cluster: scheduled-stop-251339
	I1217 01:22:39.580433   48196 config.go:182] Loaded profile config "scheduled-stop-251339": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:22:39.580517   48196 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/scheduled-stop-251339/config.json ...
	I1217 01:22:39.580715   48196 mustload.go:66] Loading cluster: scheduled-stop-251339
	I1217 01:22:39.580865   48196 config.go:182] Loaded profile config "scheduled-stop-251339": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-251339 -n scheduled-stop-251339
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-251339 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1217 01:22:39.854855   48240 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:22:39.855082   48240 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:22:39.855089   48240 out.go:374] Setting ErrFile to fd 2...
	I1217 01:22:39.855094   48240 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:22:39.855247   48240 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12839/.minikube/bin
	I1217 01:22:39.855465   48240 out.go:368] Setting JSON to false
	I1217 01:22:39.855673   48240 daemonize_unix.go:73] killing process 48229 as it is an old scheduled stop
	I1217 01:22:39.855772   48240 mustload.go:66] Loading cluster: scheduled-stop-251339
	I1217 01:22:39.856173   48240 config.go:182] Loaded profile config "scheduled-stop-251339": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:22:39.856261   48240 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/scheduled-stop-251339/config.json ...
	I1217 01:22:39.856431   48240 mustload.go:66] Loading cluster: scheduled-stop-251339
	I1217 01:22:39.856529   48240 config.go:182] Loaded profile config "scheduled-stop-251339": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1217 01:22:39.859964   17074 retry.go:31] will retry after 142.763µs: open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/scheduled-stop-251339/pid: no such file or directory
I1217 01:22:39.861148   17074 retry.go:31] will retry after 89.367µs: open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/scheduled-stop-251339/pid: no such file or directory
I1217 01:22:39.862272   17074 retry.go:31] will retry after 334.844µs: open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/scheduled-stop-251339/pid: no such file or directory
I1217 01:22:39.863395   17074 retry.go:31] will retry after 234.348µs: open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/scheduled-stop-251339/pid: no such file or directory
I1217 01:22:39.864520   17074 retry.go:31] will retry after 531.329µs: open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/scheduled-stop-251339/pid: no such file or directory
I1217 01:22:39.865648   17074 retry.go:31] will retry after 672.427µs: open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/scheduled-stop-251339/pid: no such file or directory
I1217 01:22:39.866781   17074 retry.go:31] will retry after 1.234107ms: open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/scheduled-stop-251339/pid: no such file or directory
I1217 01:22:39.868975   17074 retry.go:31] will retry after 1.943664ms: open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/scheduled-stop-251339/pid: no such file or directory
I1217 01:22:39.871174   17074 retry.go:31] will retry after 2.375487ms: open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/scheduled-stop-251339/pid: no such file or directory
I1217 01:22:39.874385   17074 retry.go:31] will retry after 4.700407ms: open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/scheduled-stop-251339/pid: no such file or directory
I1217 01:22:39.879577   17074 retry.go:31] will retry after 5.850057ms: open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/scheduled-stop-251339/pid: no such file or directory
I1217 01:22:39.885839   17074 retry.go:31] will retry after 11.912647ms: open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/scheduled-stop-251339/pid: no such file or directory
I1217 01:22:39.898760   17074 retry.go:31] will retry after 18.276604ms: open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/scheduled-stop-251339/pid: no such file or directory
I1217 01:22:39.918087   17074 retry.go:31] will retry after 23.20515ms: open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/scheduled-stop-251339/pid: no such file or directory
I1217 01:22:39.942369   17074 retry.go:31] will retry after 38.736098ms: open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/scheduled-stop-251339/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-251339 --cancel-scheduled
minikube stop output:

-- stdout --
	* All existing scheduled stops cancelled

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-251339 -n scheduled-stop-251339
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-251339
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-251339 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1217 01:23:05.559983   48398 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:23:05.560250   48398 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:23:05.560260   48398 out.go:374] Setting ErrFile to fd 2...
	I1217 01:23:05.560265   48398 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:23:05.560431   48398 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12839/.minikube/bin
	I1217 01:23:05.560685   48398 out.go:368] Setting JSON to false
	I1217 01:23:05.560759   48398 mustload.go:66] Loading cluster: scheduled-stop-251339
	I1217 01:23:05.561073   48398 config.go:182] Loaded profile config "scheduled-stop-251339": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:23:05.561134   48398 profile.go:143] Saving config to /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/scheduled-stop-251339/config.json ...
	I1217 01:23:05.561325   48398 mustload.go:66] Loading cluster: scheduled-stop-251339
	I1217 01:23:05.561428   48398 config.go:182] Loaded profile config "scheduled-stop-251339": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
E1217 01:23:44.661712   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:23:45.471866   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-698418/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-251339
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-251339: exit status 7 (62.570984ms)

                                                
                                                
-- stdout --
	scheduled-stop-251339
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-251339 -n scheduled-stop-251339
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-251339 -n scheduled-stop-251339: exit status 7 (59.882529ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-251339" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-251339
--- PASS: TestScheduledStopUnix (110.37s)

                                                
                                    
TestRunningBinaryUpgrade (365.49s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.4056823840 start -p running-upgrade-865234 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.4056823840 start -p running-upgrade-865234 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (48.985277687s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-865234 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-865234 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (5m15.019652594s)
helpers_test.go:176: Cleaning up "running-upgrade-865234" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-865234
--- PASS: TestRunningBinaryUpgrade (365.49s)

                                                
                                    
TestKubernetesUpgrade (158.97s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-271903 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-271903 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m9.360502943s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-271903
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-271903: (1.922828204s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-271903 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-271903 status --format={{.Host}}: exit status 7 (64.057326ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-271903 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-271903 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (34.195854209s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-271903 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-271903 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-271903 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (359.924961ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-271903] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22168-12839/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12839/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-beta.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-271903
	    minikube start -p kubernetes-upgrade-271903 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2719032 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-271903 --kubernetes-version=v1.35.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-271903 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-271903 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (51.92995258s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-271903" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-271903
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-271903: (1.039623071s)
--- PASS: TestKubernetesUpgrade (158.97s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-349730 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-349730 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (96.501282ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-349730] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22168-12839/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12839/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (100.58s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-349730 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-349730 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m40.348536567s)
no_kubernetes_test.go:226: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-349730 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (100.58s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (33.59s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-349730 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-349730 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (31.514745444s)
no_kubernetes_test.go:226: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-349730 status -o json
no_kubernetes_test.go:226: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-349730 status -o json: exit status 2 (209.604933ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-349730","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-349730
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-349730: (1.867403533s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (33.59s)

                                                
                                    
TestNoKubernetes/serial/Start (42.61s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:162: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-349730 --no-kubernetes --cpus=1 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:162: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-349730 --no-kubernetes --cpus=1 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (42.608440245s)
--- PASS: TestNoKubernetes/serial/Start (42.61s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22168-12839/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:173: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-349730 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-349730 "sudo systemctl is-active --quiet service kubelet": exit status 1 (208.127283ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:195: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:205: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.20s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.46s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:184: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-349730
no_kubernetes_test.go:184: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-349730: (1.461047752s)
--- PASS: TestNoKubernetes/serial/Stop (1.46s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (37.16s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:217: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-349730 --driver=kvm2  --container-runtime=crio
E1217 01:27:12.863747   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-069802/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:217: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-349730 --driver=kvm2  --container-runtime=crio: (37.155741381s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (37.16s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:173: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-349730 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-349730 "sudo systemctl is-active --quiet service kubelet": exit status 1 (171.869195ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.17s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.65s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.65s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (90.74s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.134189662 start -p stopped-upgrade-526226 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.134189662 start -p stopped-upgrade-526226 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (56.810273309s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.134189662 -p stopped-upgrade-526226 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.134189662 -p stopped-upgrade-526226 stop: (1.774375015s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-526226 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1217 01:28:35.939831   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-069802/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:28:44.661100   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:28:45.471688   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-698418/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-526226 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (32.156233245s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (90.74s)

                                                
                                    
TestPause/serial/Start (81.2s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-716229 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-716229 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m21.201452768s)
--- PASS: TestPause/serial/Start (81.20s)

                                                
                                    
TestNetworkPlugins/group/false (5.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-428588 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-428588 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (1.060413681s)

                                                
                                                
-- stdout --
	* [false-428588] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22168-12839/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12839/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 01:28:58.834729   53516 out.go:360] Setting OutFile to fd 1 ...
	I1217 01:28:58.834844   53516 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:28:58.834856   53516 out.go:374] Setting ErrFile to fd 2...
	I1217 01:28:58.834862   53516 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:28:58.835217   53516 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22168-12839/.minikube/bin
	I1217 01:28:58.835857   53516 out.go:368] Setting JSON to false
	I1217 01:28:58.837108   53516 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7885,"bootTime":1765927054,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 01:28:58.837189   53516 start.go:143] virtualization: kvm guest
	I1217 01:28:58.839051   53516 out.go:179] * [false-428588] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 01:28:58.840766   53516 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 01:28:58.840773   53516 notify.go:221] Checking for updates...
	I1217 01:28:58.843343   53516 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 01:28:58.844703   53516 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22168-12839/kubeconfig
	I1217 01:28:58.845844   53516 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22168-12839/.minikube
	I1217 01:28:58.847054   53516 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 01:28:58.848258   53516 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 01:28:58.849923   53516 config.go:182] Loaded profile config "pause-716229": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1217 01:28:58.850090   53516 config.go:182] Loaded profile config "running-upgrade-865234": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1217 01:28:58.850203   53516 config.go:182] Loaded profile config "stopped-upgrade-526226": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1217 01:28:58.850386   53516 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 01:28:59.820775   53516 out.go:179] * Using the kvm2 driver based on user configuration
	I1217 01:28:59.822075   53516 start.go:309] selected driver: kvm2
	I1217 01:28:59.822092   53516 start.go:927] validating driver "kvm2" against <nil>
	I1217 01:28:59.822107   53516 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 01:28:59.824319   53516 out.go:203] 
	W1217 01:28:59.825816   53516 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1217 01:28:59.827230   53516 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-428588 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-428588

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-428588

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-428588

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-428588

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-428588

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-428588

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-428588

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-428588

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-428588

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-428588

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428588"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428588"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428588"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-428588

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428588"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428588"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-428588" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-428588" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-428588" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-428588" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-428588" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-428588" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-428588" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-428588" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428588"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428588"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428588"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428588"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428588"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-428588" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-428588" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-428588" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428588"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428588"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428588"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428588"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428588"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22168-12839/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Dec 2025 01:27:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.33:8443
  name: running-upgrade-865234
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22168-12839/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Dec 2025 01:28:59 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.72:8443
  name: stopped-upgrade-526226
contexts:
- context:
    cluster: running-upgrade-865234
    user: running-upgrade-865234
  name: running-upgrade-865234
- context:
    cluster: stopped-upgrade-526226
    extensions:
    - extension:
        last-update: Wed, 17 Dec 2025 01:28:59 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: stopped-upgrade-526226
  name: stopped-upgrade-526226
current-context: stopped-upgrade-526226
kind: Config
users:
- name: running-upgrade-865234
  user:
    client-certificate: /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/running-upgrade-865234/client.crt
    client-key: /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/running-upgrade-865234/client.key
- name: stopped-upgrade-526226
  user:
    client-certificate: /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/stopped-upgrade-526226/client.crt
    client-key: /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/stopped-upgrade-526226/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-428588

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428588"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428588"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428588"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428588"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428588"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428588"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428588"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428588"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428588"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428588"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428588"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428588"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428588"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428588"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428588"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428588"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428588"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-428588"

                                                
                                                
----------------------- debugLogs end: false-428588 [took: 3.879777171s] --------------------------------
helpers_test.go:176: Cleaning up "false-428588" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-428588
--- PASS: TestNetworkPlugins/group/false (5.13s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.49s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-526226
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-526226: (1.487903322s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.49s)

                                                
                                    
TestISOImage/Setup (34.75s)

                                                
                                                
=== RUN   TestISOImage/Setup
iso_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p guest-002863 --no-kubernetes --driver=kvm2  --container-runtime=crio
iso_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p guest-002863 --no-kubernetes --driver=kvm2  --container-runtime=crio: (34.745813166s)
--- PASS: TestISOImage/Setup (34.75s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (86.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-625875 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-625875 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m26.21657371s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (86.22s)

                                                
                                    
TestISOImage/Binaries/crictl (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/crictl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-002863 ssh "which crictl"
--- PASS: TestISOImage/Binaries/crictl (0.18s)

                                                
                                    
TestISOImage/Binaries/curl (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/curl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-002863 ssh "which curl"
--- PASS: TestISOImage/Binaries/curl (0.18s)

                                                
                                    
TestISOImage/Binaries/docker (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/docker
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-002863 ssh "which docker"
--- PASS: TestISOImage/Binaries/docker (0.19s)

                                                
                                    
TestISOImage/Binaries/git (0.21s)

                                                
                                                
=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/git
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-002863 ssh "which git"
--- PASS: TestISOImage/Binaries/git (0.21s)

                                                
                                    
TestISOImage/Binaries/iptables (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/iptables
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-002863 ssh "which iptables"
--- PASS: TestISOImage/Binaries/iptables (0.20s)

                                                
                                    
TestISOImage/Binaries/podman (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/podman
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-002863 ssh "which podman"
--- PASS: TestISOImage/Binaries/podman (0.19s)

                                                
                                    
TestISOImage/Binaries/rsync (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/rsync
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-002863 ssh "which rsync"
--- PASS: TestISOImage/Binaries/rsync (0.20s)

                                                
                                    
TestISOImage/Binaries/socat (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/socat
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-002863 ssh "which socat"
--- PASS: TestISOImage/Binaries/socat (0.18s)

                                                
                                    
TestISOImage/Binaries/wget (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/wget
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-002863 ssh "which wget"
--- PASS: TestISOImage/Binaries/wget (0.18s)

                                                
                                    
TestISOImage/Binaries/VBoxControl (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-002863 ssh "which VBoxControl"
--- PASS: TestISOImage/Binaries/VBoxControl (0.18s)

                                                
                                    
TestISOImage/Binaries/VBoxService (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-002863 ssh "which VBoxService"
--- PASS: TestISOImage/Binaries/VBoxService (0.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (114.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-395127 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-395127 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (1m54.380686117s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (114.38s)
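
The no-preload profile starts without the preloaded image tarball (--preload=false), so every image is pulled through cri-o at start time; that is why this FirstStart takes 1m54s against roughly 1m20s for the embed-certs start below. The logged invocation, reflowed for readability:

	$ out/minikube-linux-amd64 start -p no-preload-395127 \
	    --memory=3072 --preload=false \
	    --driver=kvm2 --container-runtime=crio \
	    --kubernetes-version=v1.35.0-beta.0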

TestStartStop/group/old-k8s-version/serial/DeployApp (10.38s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-625875 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [6bb13578-94c1-4dcf-86b8-e77ca8f50f5d] Pending
helpers_test.go:353: "busybox" [6bb13578-94c1-4dcf-86b8-e77ca8f50f5d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [6bb13578-94c1-4dcf-86b8-e77ca8f50f5d] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003990907s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-625875 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.38s)
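
DeployApp creates a pod from testdata/busybox.yaml, waits on the integration-test=busybox label, then execs "ulimit -n" to confirm the pod accepts commands once Ready. The manifest itself is not reproduced in the log; an approximate stand-in, inferred from the wait selector here and the busybox image listed later under VerifyKubernetesImages, would look like:

	$ kubectl --context old-k8s-version-625875 apply -f - <<'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: busybox
	  labels:
	    integration-test: busybox
	spec:
	  containers:
	  - name: busybox
	    image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    command: ["sleep", "3600"]
	EOF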

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.43s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-625875 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-625875 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.326318133s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-625875 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.43s)
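
EnableAddonWhileActive redirects the metrics-server image to a stand-in (echoserver behind the nonexistent registry fake.domain), so the assertion is about the override plumbing rather than the real image. Reproducing the check by hand, with a grep added for convenience; if the override landed, the Image field should read fake.domain/registry.k8s.io/echoserver:1.4:

	$ out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-625875 \
	    --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
	    --registries=MetricsServer=fake.domain
	$ kubectl --context old-k8s-version-625875 -n kube-system \
	    describe deploy/metrics-server | grep -i image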

TestStartStop/group/old-k8s-version/serial/Stop (84.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-625875 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-625875 --alsologtostderr -v=3: (1m24.021441602s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (84.02s)

TestStartStop/group/embed-certs/serial/FirstStart (80.08s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-771598 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-771598 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (1m20.07615873s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (80.08s)

TestStartStop/group/no-preload/serial/DeployApp (8.38s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-395127 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [2a1f922d-b5ad-42c3-b3a5-ca299647b673] Pending
helpers_test.go:353: "busybox" [2a1f922d-b5ad-42c3-b3a5-ca299647b673] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [2a1f922d-b5ad-42c3-b3a5-ca299647b673] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004673236s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-395127 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.38s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.98s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-395127 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-395127 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.98s)

TestStartStop/group/no-preload/serial/Stop (84.54s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-395127 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-395127 --alsologtostderr -v=3: (1m24.540435364s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (84.54s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (54.46s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-071123 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-071123 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (54.462528799s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (54.46s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.16s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-625875 -n old-k8s-version-625875
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-625875 -n old-k8s-version-625875: exit status 7 (70.614656ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-625875 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.16s)
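
Exit status 7 from "minikube status" is the stopped-host code, hence the harness note that it "may be ok". Seen by hand:

	$ out/minikube-linux-amd64 status --format='{{.Host}}' -p old-k8s-version-625875
	Stopped
	$ echo $?
	7

Enabling the dashboard addon still succeeds here because, as far as this test exercises it, the change is recorded in the profile's addon configuration rather than applied to the stopped cluster.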

TestStartStop/group/old-k8s-version/serial/SecondStart (57.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-625875 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
E1217 01:32:12.864188   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-069802/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-625875 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (56.736879501s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-625875 -n old-k8s-version-625875
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (57.01s)

TestStartStop/group/embed-certs/serial/DeployApp (10.36s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-771598 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [b93d2825-a5c6-4c68-b059-d89c149e9a97] Pending
helpers_test.go:353: "busybox" [b93d2825-a5c6-4c68-b059-d89c149e9a97] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [b93d2825-a5c6-4c68-b059-d89c149e9a97] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.005173651s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-771598 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.36s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-771598 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-771598 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.082959908s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-771598 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.18s)

TestStartStop/group/embed-certs/serial/Stop (71.18s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-771598 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-771598 --alsologtostderr -v=3: (1m11.175808895s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (71.18s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-071123 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [cbdb62fa-43e9-4c23-889a-1bd607bd30b6] Pending
helpers_test.go:353: "busybox" [cbdb62fa-43e9-4c23-889a-1bd607bd30b6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [cbdb62fa-43e9-4c23-889a-1bd607bd30b6] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004100484s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-071123 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.34s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-071123 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-071123 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.059809689s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-071123 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.15s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-395127 -n no-preload-395127
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-395127 -n no-preload-395127: exit status 7 (67.722891ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-395127 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/no-preload/serial/SecondStart (59.79s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-395127 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-395127 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (59.43008474s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-395127 -n no-preload-395127
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (59.79s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (89.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-071123 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-071123 --alsologtostderr -v=3: (1m29.009425477s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (89.01s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (14.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-5q97l" [eeca0e56-c96d-4711-ac2d-3c8eb8548106] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-5q97l" [eeca0e56-c96d-4711-ac2d-3c8eb8548106] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.004208967s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (14.01s)
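
UserAppExistsAfterStop checks that workloads created before the stop (here the dashboard enabled during EnableAddonAfterStop) come back on their own after the restart. The manual equivalent of the wait it performs:

	$ kubectl --context old-k8s-version-625875 -n kubernetes-dashboard \
	    wait pod -l k8s-app=kubernetes-dashboard \
	    --for=condition=Ready --timeout=9m0s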

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-5q97l" [eeca0e56-c96d-4711-ac2d-3c8eb8548106] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004290998s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-625875 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-625875 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.20s)
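
VerifyKubernetesImages lists the images cached in the VM and reports anything outside the stock minikube set, here the kindnet and busybox images left behind by earlier subtests. A sketch for inspecting the same data, assuming jq is available and that each JSON entry carries a repoTags array (which is what current minikube releases emit):

	$ out/minikube-linux-amd64 -p old-k8s-version-625875 image list --format=json \
	    | jq -r '.[].repoTags[]'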

TestStartStop/group/old-k8s-version/serial/Pause (2.74s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-625875 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-625875 -n old-k8s-version-625875
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-625875 -n old-k8s-version-625875: exit status 2 (232.115359ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-625875 -n old-k8s-version-625875
E1217 01:33:28.543086   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-698418/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-625875 -n old-k8s-version-625875: exit status 2 (234.831854ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-625875 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-625875 -n old-k8s-version-625875
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-625875 -n old-k8s-version-625875
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.74s)
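
The Pause sequence leans on status reporting distinct per-component states: while paused, the apiserver shows Paused and the kubelet shows Stopped, both via exit status 2, which the harness accepts as the expected intermediate state. Condensed:

	$ out/minikube-linux-amd64 pause -p old-k8s-version-625875
	$ out/minikube-linux-amd64 status --format='{{.APIServer}}' -p old-k8s-version-625875   # Paused, exit 2
	$ out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p old-k8s-version-625875     # Stopped, exit 2
	$ out/minikube-linux-amd64 unpause -p old-k8s-version-625875                            # both return to Running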

TestStartStop/group/newest-cni/serial/FirstStart (40.97s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-872345 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1217 01:33:44.661546   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/addons-262069/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:33:45.471833   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-698418/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-872345 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (40.965267839s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (40.97s)
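
The newest-cni group starts with a bare CNI setup: --network-plugin=cni plus a pod CIDR handed to kubeadm through --extra-config, leaving installation of an actual CNI to the user (hence the "requires additional setup" warnings in the subtests below). Note the narrowed --wait set; without a CNI, ordinary pods cannot schedule, so only the control plane is waited on. Reflowed:

	$ out/minikube-linux-amd64 start -p newest-cni-872345 --memory=3072 \
	    --wait=apiserver,system_pods,default_sa \
	    --network-plugin=cni \
	    --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	    --driver=kvm2 --container-runtime=crio \
	    --kubernetes-version=v1.35.0-beta.0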

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.16s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-771598 -n embed-certs-771598
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-771598 -n embed-certs-771598: exit status 7 (68.255173ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-771598 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.16s)

TestStartStop/group/embed-certs/serial/SecondStart (45.26s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-771598 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-771598 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (44.893405312s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-771598 -n embed-certs-771598
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (45.26s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-k5wlx" [79e29e81-1090-4308-becb-7fedafbd060a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004378654s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-k5wlx" [79e29e81-1090-4308-becb-7fedafbd060a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00853058s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-395127 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.47s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-872345 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-872345 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.469530244s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.47s)

TestStartStop/group/newest-cni/serial/Stop (7.35s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-872345 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-872345 --alsologtostderr -v=3: (7.352275724s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.35s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-395127 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/no-preload/serial/Pause (2.75s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-395127 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-395127 -n no-preload-395127
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-395127 -n no-preload-395127: exit status 2 (225.026129ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-395127 -n no-preload-395127
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-395127 -n no-preload-395127: exit status 2 (226.246031ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-395127 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-395127 -n no-preload-395127
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-395127 -n no-preload-395127
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.75s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-872345 -n newest-cni-872345
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-872345 -n newest-cni-872345: exit status 7 (72.651246ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-872345 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/newest-cni/serial/SecondStart (34.1s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-872345 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-872345 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (33.733598728s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-872345 -n newest-cni-872345
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (34.10s)

TestNetworkPlugins/group/auto/Start (95.73s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-428588 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-428588 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m35.733890882s)
--- PASS: TestNetworkPlugins/group/auto/Start (95.73s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (12.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-8dj6h" [d5f420c6-a6c1-4171-a286-67fd1e423e46] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-8dj6h" [d5f420c6-a6c1-4171-a286-67fd1e423e46] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.004926641s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (12.01s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-071123 -n default-k8s-diff-port-071123
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-071123 -n default-k8s-diff-port-071123: exit status 7 (75.320224ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-071123 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (70.41s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-071123 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-071123 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (1m10.145412935s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-071123 -n default-k8s-diff-port-071123
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (70.41s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-8dj6h" [d5f420c6-a6c1-4171-a286-67fd1e423e46] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004642236s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-771598 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-771598 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/embed-certs/serial/Pause (2.8s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-771598 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-771598 -n embed-certs-771598
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-771598 -n embed-certs-771598: exit status 2 (258.35766ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-771598 -n embed-certs-771598
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-771598 -n embed-certs-771598: exit status 2 (245.049439ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-771598 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-771598 -n embed-certs-771598
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-771598 -n embed-certs-771598
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.80s)

TestNetworkPlugins/group/kindnet/Start (83.09s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-428588 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-428588 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m23.091494971s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (83.09s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-872345 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/newest-cni/serial/Pause (3.58s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-872345 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-872345 -n newest-cni-872345
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-872345 -n newest-cni-872345: exit status 2 (305.78075ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-872345 -n newest-cni-872345
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-872345 -n newest-cni-872345: exit status 2 (309.665714ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-872345 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p newest-cni-872345 --alsologtostderr -v=1: (1.125802964s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-872345 -n newest-cni-872345
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-872345 -n newest-cni-872345
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.58s)

TestNetworkPlugins/group/calico/Start (118.21s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-428588 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E1217 01:35:34.115974   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/old-k8s-version-625875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:35:34.122425   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/old-k8s-version-625875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:35:34.133930   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/old-k8s-version-625875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:35:34.155483   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/old-k8s-version-625875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:35:34.197346   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/old-k8s-version-625875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:35:34.279339   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/old-k8s-version-625875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:35:34.440955   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/old-k8s-version-625875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:35:34.762602   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/old-k8s-version-625875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:35:35.404336   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/old-k8s-version-625875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:35:36.686274   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/old-k8s-version-625875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:35:39.247744   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/old-k8s-version-625875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:35:44.369173   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/old-k8s-version-625875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-428588 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m58.213169863s)
--- PASS: TestNetworkPlugins/group/calico/Start (118.21s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (7.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-62jn2" [b587299c-75ad-4326-b97e-6259c3d0ca66] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-62jn2" [b587299c-75ad-4326-b97e-6259c3d0ca66] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 7.003789785s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (7.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-62jn2" [b587299c-75ad-4326-b97e-6259c3d0ca66] Running
E1217 01:35:54.610769   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/old-k8s-version-625875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004116765s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-071123 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-428588 "pgrep -a kubelet"
I1217 01:35:57.441623   17074 config.go:182] Loaded profile config "auto-428588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)
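
KubeletFlags dumps the live kubelet command line from inside the VM so the test can assert on runtime flags; pgrep -a prints the PID followed by the full invocation. By hand:

	$ out/minikube-linux-amd64 ssh -p auto-428588 "pgrep -a kubelet"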

TestNetworkPlugins/group/auto/NetCatPod (13.33s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-428588 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-jjb4p" [42ffe862-b770-493b-bcd2-ed67f885b349] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-jjb4p" [42ffe862-b770-493b-bcd2-ed67f885b349] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.004760307s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.33s)
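Note for readers reproducing the NetCatPod checks outside the harness: the pattern above is to apply testdata/netcat-deployment.yaml and then poll the app=netcat pods until every one reports Running. A minimal Go sketch of that polling loop follows; it is not the helpers in helpers_test.go, it shells out to kubectl, and the context name is illustrative.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForRunning polls kubectl until every pod matching selector is Running.
func waitForRunning(kubectlContext, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Ask kubectl for the phase of every pod matching the selector.
		out, err := exec.Command("kubectl", "--context", kubectlContext,
			"get", "pods", "-l", selector,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil {
			phases := strings.Fields(string(out))
			running := len(phases) > 0
			for _, p := range phases {
				if p != "Running" {
					running = false
					break
				}
			}
			if running {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pods matching %q not Running within %v", selector, timeout)
}

func main() {
	if err := waitForRunning("auto-428588", "app=netcat", 15*time.Minute); err != nil {
		fmt.Println(err)
	}
}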

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-071123 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-071123 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-071123 --alsologtostderr -v=1: (1.070051606s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-071123 -n default-k8s-diff-port-071123
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-071123 -n default-k8s-diff-port-071123: exit status 2 (325.868751ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-071123 -n default-k8s-diff-port-071123
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-071123 -n default-k8s-diff-port-071123: exit status 2 (267.884871ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-071123 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-071123 -n default-k8s-diff-port-071123
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-071123 -n default-k8s-diff-port-071123
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.35s)
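Note on the exit codes above: after `minikube pause`, `status --format={{.APIServer}}` prints "Paused" and `--format={{.Kubelet}}` prints "Stopped", both with exit status 2, which the harness records as "may be ok". A minimal sketch of reading one component's status while tolerating that exit code is below; it assumes the binary path and profile name shown in the log and is not the test's own code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// componentStatus returns the printed status and the process exit code.
// Exit code 2 only means some component is not Running, not a hard failure.
func componentStatus(profile, format string) (string, int) {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format", format, "-p", profile, "-n", profile)
	out, err := cmd.Output()
	code := 0
	if ee, ok := err.(*exec.ExitError); ok {
		code = ee.ExitCode()
	}
	return strings.TrimSpace(string(out)), code
}

func main() {
	apiserver, code := componentStatus("default-k8s-diff-port-071123", "{{.APIServer}}")
	fmt.Printf("apiserver=%s exit=%d\n", apiserver, code) // e.g. apiserver=Paused exit=2
}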

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (74.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-428588 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-428588 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m14.68661405s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (74.69s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-428588 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-428588 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-428588 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)
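The DNS, Localhost and HairPin subtests above all exec into the netcat deployment: nslookup of kubernetes.default checks cluster DNS, nc against localhost:8080 checks same-pod reachability, and nc against the netcat service name checks hairpin traffic (a pod reaching itself through its own service). A minimal sketch of the same three probes via kubectl exec; the context name is illustrative and this is not the harness code.

package main

import (
	"fmt"
	"os/exec"
)

// probe runs a shell command inside the netcat deployment and prints its output.
func probe(kubectlContext, shellCmd string) error {
	out, err := exec.Command("kubectl", "--context", kubectlContext,
		"exec", "deployment/netcat", "--", "/bin/sh", "-c", shellCmd).CombinedOutput()
	fmt.Printf("$ %s\n%s", shellCmd, out)
	return err
}

func main() {
	ctx := "auto-428588"
	_ = probe(ctx, "nslookup kubernetes.default")    // cluster DNS resolves service names
	_ = probe(ctx, "nc -w 5 -i 5 -z localhost 8080") // pod reaches its own port via localhost
	_ = probe(ctx, "nc -w 5 -i 5 -z netcat 8080")    // hairpin: pod reaches itself via its service
}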

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-dc5dg" [9f93d823-8921-4305-b48e-29ce92cf176a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005868553s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-428588 "pgrep -a kubelet"
I1217 01:36:24.445085   17074 config.go:182] Loaded profile config "kindnet-428588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (12.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-428588 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-c8rs5" [d7bd55cc-a907-48f2-a44d-d6bf6da7c187] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-c8rs5" [d7bd55cc-a907-48f2-a44d-d6bf6da7c187] Running
E1217 01:36:32.413576   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:36:32.420057   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:36:32.431558   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:36:32.453068   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:36:32.494603   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:36:32.576816   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:36:32.739004   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:36:33.060511   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:36:33.701750   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:36:34.983200   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.007115449s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.33s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (85.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-428588 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-428588 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m25.787850645s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (85.79s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-428588 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-428588 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-428588 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (73.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-428588 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E1217 01:36:56.054896   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/old-k8s-version-625875/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-428588 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m13.16486416s)
--- PASS: TestNetworkPlugins/group/flannel/Start (73.17s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-sbk4x" [30a65c55-17db-4be0-8bc4-6f83406a90c3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006674875s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-428588 "pgrep -a kubelet"
I1217 01:37:04.988373   17074 config.go:182] Loaded profile config "calico-428588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-428588 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-jplrm" [f217100b-ca18-4ca7-a0e1-2c2486c43594] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-jplrm" [f217100b-ca18-4ca7-a0e1-2c2486c43594] Running
E1217 01:37:12.863732   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/functional-069802/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:37:13.390159   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.006534248s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.29s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-428588 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-428588 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-428588 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-428588 "pgrep -a kubelet"
I1217 01:37:18.700846   17074 config.go:182] Loaded profile config "custom-flannel-428588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-428588 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-f7nqx" [0393b1bf-a3ad-4a86-b0ec-c4443d534059] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-f7nqx" [0393b1bf-a3ad-4a86-b0ec-c4443d534059] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.006228064s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.35s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-428588 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-428588 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-428588 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (83.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-428588 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-428588 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m23.935674158s)
--- PASS: TestNetworkPlugins/group/bridge/Start (83.94s)

                                                
                                    
TestISOImage/PersistentMounts//data (0.17s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-002863 ssh "df -t ext4 /data | grep /data"
--- PASS: TestISOImage/PersistentMounts//data (0.17s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/docker (0.17s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-002863 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/docker (0.17s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/cni (0.17s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-002863 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
--- PASS: TestISOImage/PersistentMounts//var/lib/cni (0.17s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/kubelet (0.17s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-002863 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
--- PASS: TestISOImage/PersistentMounts//var/lib/kubelet (0.17s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/minikube (0.18s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-002863 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
--- PASS: TestISOImage/PersistentMounts//var/lib/minikube (0.18s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/toolbox (0.18s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-002863 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
--- PASS: TestISOImage/PersistentMounts//var/lib/toolbox (0.18s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/boot2docker (0.18s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-002863 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/boot2docker (0.18s)
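Each PersistentMounts subtest above asserts that the directory is served by an ext4 filesystem inside the guest, using `df -t ext4 <dir>` over `minikube ssh`. A minimal sketch of the same check across all seven mount points; the profile name is taken from the log and this is not the test's own code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// isExt4Mount reports whether dir shows up in `df -t ext4` output inside the guest.
func isExt4Mount(profile, dir string) bool {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh",
		fmt.Sprintf("df -t ext4 %s | grep %s", dir, dir)).Output()
	return err == nil && strings.Contains(string(out), dir)
}

func main() {
	dirs := []string{"/data", "/var/lib/docker", "/var/lib/cni", "/var/lib/kubelet",
		"/var/lib/minikube", "/var/lib/toolbox", "/var/lib/boot2docker"}
	for _, dir := range dirs {
		fmt.Printf("%-22s ext4=%v\n", dir, isExt4Mount("guest-002863", dir))
	}
}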

                                                
                                    
TestISOImage/VersionJSON (0.16s)

                                                
                                                
=== RUN   TestISOImage/VersionJSON
iso_test.go:106: (dbg) Run:  out/minikube-linux-amd64 -p guest-002863 ssh "cat /version.json"
iso_test.go:116: Successfully parsed /version.json:
iso_test.go:118:   commit: 1d20c337b4b256c51c2d46553500e8ea625f1d01
iso_test.go:118:   iso_version: v1.37.0-1765846775-22141
iso_test.go:118:   kicbase_version: v0.0.48-1765661130-22141
iso_test.go:118:   minikube_version: v1.37.0
--- PASS: TestISOImage/VersionJSON (0.16s)
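The VersionJSON test reads /version.json from the guest and prints the commit, iso_version, kicbase_version and minikube_version fields. A minimal sketch of the same read-and-decode step; the JSON field names are assumed to match what the log prints, and the profile name is illustrative.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// isoVersion mirrors the fields the test logs from /version.json (assumed key names).
type isoVersion struct {
	Commit          string `json:"commit"`
	ISOVersion      string `json:"iso_version"`
	KicbaseVersion  string `json:"kicbase_version"`
	MinikubeVersion string `json:"minikube_version"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "guest-002863",
		"ssh", "cat /version.json").Output()
	if err != nil {
		panic(err)
	}
	var v isoVersion
	if err := json.Unmarshal(out, &v); err != nil {
		panic(err)
	}
	fmt.Printf("commit=%s iso=%s kicbase=%s minikube=%s\n",
		v.Commit, v.ISOVersion, v.KicbaseVersion, v.MinikubeVersion)
}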

                                                
                                    
TestISOImage/eBPFSupport (0.17s)

                                                
                                                
=== RUN   TestISOImage/eBPFSupport
iso_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p guest-002863 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
--- PASS: TestISOImage/eBPFSupport (0.17s)
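The eBPFSupport check only probes for BTF type information at /sys/kernel/btf/vmlinux in the guest kernel, which is what eBPF tooling such as CO-RE programs relies on. A minimal equivalent probe, with the profile name again illustrative:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same shell test the log shows, run over minikube ssh.
	out, _ := exec.Command("out/minikube-linux-amd64", "-p", "guest-002863", "ssh",
		"test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'").Output()
	fmt.Printf("BTF vmlinux: %s", out)
}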

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-428588 "pgrep -a kubelet"
I1217 01:37:52.554866   17074 config.go:182] Loaded profile config "enable-default-cni-428588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-428588 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-rzfkl" [82b7badc-f572-44dc-8411-981b858eff46] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1217 01:37:54.351679   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/no-preload-395127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:37:55.597491   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/default-k8s-diff-port-071123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:37:55.603920   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/default-k8s-diff-port-071123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:37:55.615353   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/default-k8s-diff-port-071123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:37:55.636843   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/default-k8s-diff-port-071123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:37:55.678359   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/default-k8s-diff-port-071123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:37:55.760191   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/default-k8s-diff-port-071123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:37:55.921860   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/default-k8s-diff-port-071123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:37:56.243803   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/default-k8s-diff-port-071123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-rzfkl" [82b7badc-f572-44dc-8411-981b858eff46] Running
E1217 01:37:56.886066   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/default-k8s-diff-port-071123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:37:58.168421   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/default-k8s-diff-port-071123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:38:00.730012   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/default-k8s-diff-port-071123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.380666216s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.64s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-428588 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-428588 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-428588 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-dxjz4" [8e958497-6c03-45d7-bdb8-5eac6b33479e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005129315s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-428588 "pgrep -a kubelet"
I1217 01:38:13.041714   17074 config.go:182] Loaded profile config "flannel-428588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-428588 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-5grxc" [7fe9efe4-3a03-4a86-8182-6b253412f821] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1217 01:38:16.093706   17074 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/default-k8s-diff-port-071123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-5grxc" [7fe9efe4-3a03-4a86-8182-6b253412f821] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.005203199s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.27s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-428588 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-428588 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-428588 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-428588 "pgrep -a kubelet"
I1217 01:38:56.737383   17074 config.go:182] Loaded profile config "bridge-428588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-428588 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-wf6tt" [02802932-ca18-4915-a9db-4cb2a73053b3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-wf6tt" [02802932-ca18-4915-a9db-4cb2a73053b3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.00482036s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.23s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-428588 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-428588 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-428588 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    

Test skip (52/431)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0
38 TestAddons/serial/Volcano 0.33
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
126 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
127 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
129 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
130 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
210 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0.01
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService 0.01
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 0.01
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.01
258 TestGvisorAddon 0
280 TestImageBuild 0
308 TestKicCustomNetwork 0
309 TestKicExistingNetwork 0
310 TestKicCustomSubnet 0
311 TestKicStaticIP 0
343 TestChangeNoneUser 0
346 TestScheduledStopWindows 0
348 TestSkaffold 0
350 TestInsufficientStorage 0
354 TestMissingContainerUpgrade 0
373 TestStartStop/group/disable-driver-mounts 0.16
377 TestNetworkPlugins/group/kubenet 3.93
389 TestNetworkPlugins/group/cilium 3.82
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.33s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-262069 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.33s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
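
All of the TunnelCmd subtests above skip with the same reason: the test host cannot execute 'route' without a password prompt. A small hypothetical Go probe for that condition (an assumption about the kind of check involved, not minikube's actual code) is:

// Hypothetical probe: checks whether `sudo -n route` succeeds, i.e. whether
// route can be executed without a password prompt. Names are illustrative.
package main

import (
	"fmt"
	"os/exec"
)

func canRunRouteWithoutPassword() bool {
	// sudo -n fails instead of prompting when a password would be required.
	return exec.Command("sudo", "-n", "route").Run() == nil
}

func main() {
	if !canRunRouteWithoutPassword() {
		fmt.Println("password required to execute 'route', skipping tunnel tests")
		return
	}
	fmt.Println("route can run without a password; tunnel tests could proceed")
}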

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-796781" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-796781
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-428588 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-428588

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-428588

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-428588

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-428588

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-428588

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-428588

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-428588

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-428588

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-428588

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-428588

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428588"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428588"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428588"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-428588

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428588"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428588"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-428588" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-428588" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-428588" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-428588" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-428588" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-428588" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-428588" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-428588" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428588"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428588"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428588"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428588"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428588"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-428588" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-428588" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-428588" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428588"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428588"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428588"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428588"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428588"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22168-12839/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Dec 2025 01:28:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.61.226:8443
  name: cert-expiration-656320
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22168-12839/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Dec 2025 01:27:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.33:8443
  name: running-upgrade-865234
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22168-12839/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Dec 2025 01:28:50 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.72:8443
  name: stopped-upgrade-526226
contexts:
- context:
    cluster: cert-expiration-656320
    extensions:
    - extension:
        last-update: Wed, 17 Dec 2025 01:28:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-656320
  name: cert-expiration-656320
- context:
    cluster: running-upgrade-865234
    user: running-upgrade-865234
  name: running-upgrade-865234
- context:
    cluster: stopped-upgrade-526226
    user: stopped-upgrade-526226
  name: stopped-upgrade-526226
current-context: cert-expiration-656320
kind: Config
users:
- name: cert-expiration-656320
  user:
    client-certificate: /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/cert-expiration-656320/client.crt
    client-key: /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/cert-expiration-656320/client.key
- name: running-upgrade-865234
  user:
    client-certificate: /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/running-upgrade-865234/client.crt
    client-key: /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/running-upgrade-865234/client.key
- name: stopped-upgrade-526226
  user:
    client-certificate: /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/stopped-upgrade-526226/client.crt
    client-key: /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/stopped-upgrade-526226/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-428588

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428588"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428588"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428588"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428588"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428588"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428588"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428588"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428588"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428588"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428588"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428588"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428588"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428588"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428588"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428588"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428588"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428588"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-428588"

                                                
                                                
----------------------- debugLogs end: kubenet-428588 [took: 3.747863032s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-428588" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-428588
--- SKIP: TestNetworkPlugins/group/kubenet (3.93s)
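
The kubenet debugLogs above fail uniformly with "context was not found" because the kubenet-428588 profile is never started (the test skips first). A minimal hypothetical Go check (not part of the test harness) that verifies a kubectl context exists before collecting per-context debug output:

// Hypothetical helper: lists kubectl contexts and checks membership before
// issuing per-context debug commands, avoiding "context was not found" noise.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func contextExists(name string) (bool, error) {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	for _, ctx := range strings.Fields(string(out)) {
		if ctx == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := contextExists("kubenet-428588")
	if err != nil {
		fmt.Println("could not query kubectl contexts:", err)
		return
	}
	fmt.Println("context kubenet-428588 exists:", ok)
}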

                                                
                                    
TestNetworkPlugins/group/cilium (3.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-428588 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-428588

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-428588

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-428588

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-428588

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-428588

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-428588

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-428588

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-428588

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-428588

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-428588

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428588"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428588"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428588"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-428588

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428588"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428588"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-428588" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-428588" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-428588" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-428588" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-428588" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-428588" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-428588" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-428588" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428588"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428588"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428588"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428588"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428588"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-428588

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-428588

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-428588" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-428588" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-428588

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-428588

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-428588" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-428588" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-428588" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-428588" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-428588" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428588"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428588"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428588"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428588"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428588"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22168-12839/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Dec 2025 01:27:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.33:8443
  name: running-upgrade-865234
contexts:
- context:
    cluster: running-upgrade-865234
    user: running-upgrade-865234
  name: running-upgrade-865234
current-context: ""
kind: Config
users:
- name: running-upgrade-865234
  user:
    client-certificate: /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/running-upgrade-865234/client.crt
    client-key: /home/jenkins/minikube-integration/22168-12839/.minikube/profiles/running-upgrade-865234/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-428588

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428588"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428588"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428588"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428588"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428588"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428588"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428588"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428588"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428588"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428588"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428588"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428588"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428588"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428588"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428588"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428588"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428588"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-428588" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-428588"

                                                
                                                
----------------------- debugLogs end: cilium-428588 [took: 3.662680982s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-428588" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-428588
--- SKIP: TestNetworkPlugins/group/cilium (3.82s)

                                                
                                    