Test Report: KVM_Linux_crio 21664

0ce7767ba630d3046e785243932d5087fdf03a88:2025-10-26:42076

Test fail (7/323)

TestAddons/parallel/Ingress (158.2s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-061252 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-061252 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-061252 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [acf79c32-b924-4f27-be81-436a760fbf38] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [acf79c32-b924-4f27-be81-436a760fbf38] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.002927977s
I1026 14:19:58.058309  141233 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-061252 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-061252 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m13.825555437s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
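(Note: an exit status of 28 from curl normally indicates the request timed out, i.e. the ingress controller never answered on 127.0.0.1 inside the VM. Below is a minimal Go sketch of an equivalent manual probe; the binary path, profile name, URL and Host header are taken from the log above, while the 10s per-attempt cap and the retry cadence are added purely for illustration.)

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	for {
		// Same check the test performs, with curl capped at 10s per attempt
		// so a hung connection surfaces as curl exit code 28 (timeout).
		cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64", "-p", "addons-061252",
			"ssh", "curl -s -m 10 http://127.0.0.1/ -H 'Host: nginx.example.com'")
		out, err := cmd.CombinedOutput()
		if err == nil {
			fmt.Printf("ingress responded:\n%s\n", out)
			return
		}
		fmt.Printf("attempt failed (%v), retrying...\n", err)
		select {
		case <-ctx.Done():
			fmt.Println("gave up: no response from the ingress on 127.0.0.1 inside the VM")
			return
		case <-time.After(10 * time.Second):
		}
	}
}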
addons_test.go:288: (dbg) Run:  kubectl --context addons-061252 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-061252 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.34
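(The ingress-dns step resolves the test hostname against the DNS responder on the node IP reported by `minikube ip`. A rough Go equivalent of that nslookup, assuming only the hostname and IP shown above:)

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Send every lookup to the ingress-dns responder on the minikube node,
	// mirroring `nslookup hello-john.test 192.168.39.34`.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, network, "192.168.39.34:53")
		},
	}

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	addrs, err := r.LookupHost(ctx, "hello-john.test")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("hello-john.test resolves to:", addrs)
}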
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-061252 -n addons-061252
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-061252 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-061252 logs -n 25: (1.171842901s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-183267                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-183267 │ jenkins │ v1.37.0 │ 26 Oct 25 14:15 UTC │ 26 Oct 25 14:15 UTC │
	│ start   │ --download-only -p binary-mirror-143028 --alsologtostderr --binary-mirror http://127.0.0.1:45021 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-143028 │ jenkins │ v1.37.0 │ 26 Oct 25 14:15 UTC │                     │
	│ delete  │ -p binary-mirror-143028                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-143028 │ jenkins │ v1.37.0 │ 26 Oct 25 14:15 UTC │ 26 Oct 25 14:15 UTC │
	│ addons  │ disable dashboard -p addons-061252                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-061252        │ jenkins │ v1.37.0 │ 26 Oct 25 14:15 UTC │                     │
	│ addons  │ enable dashboard -p addons-061252                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-061252        │ jenkins │ v1.37.0 │ 26 Oct 25 14:15 UTC │                     │
	│ start   │ -p addons-061252 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-061252        │ jenkins │ v1.37.0 │ 26 Oct 25 14:15 UTC │ 26 Oct 25 14:18 UTC │
	│ addons  │ addons-061252 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-061252        │ jenkins │ v1.37.0 │ 26 Oct 25 14:18 UTC │ 26 Oct 25 14:18 UTC │
	│ addons  │ addons-061252 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-061252        │ jenkins │ v1.37.0 │ 26 Oct 25 14:19 UTC │ 26 Oct 25 14:19 UTC │
	│ addons  │ addons-061252 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-061252        │ jenkins │ v1.37.0 │ 26 Oct 25 14:19 UTC │ 26 Oct 25 14:19 UTC │
	│ addons  │ addons-061252 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-061252        │ jenkins │ v1.37.0 │ 26 Oct 25 14:19 UTC │ 26 Oct 25 14:19 UTC │
	│ addons  │ enable headlamp -p addons-061252 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-061252        │ jenkins │ v1.37.0 │ 26 Oct 25 14:19 UTC │ 26 Oct 25 14:19 UTC │
	│ ssh     │ addons-061252 ssh cat /opt/local-path-provisioner/pvc-aa911efc-959d-403c-96ae-f4cc24f83eca_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                      │ addons-061252        │ jenkins │ v1.37.0 │ 26 Oct 25 14:19 UTC │ 26 Oct 25 14:19 UTC │
	│ addons  │ addons-061252 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-061252        │ jenkins │ v1.37.0 │ 26 Oct 25 14:19 UTC │ 26 Oct 25 14:19 UTC │
	│ addons  │ addons-061252 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-061252        │ jenkins │ v1.37.0 │ 26 Oct 25 14:19 UTC │ 26 Oct 25 14:20 UTC │
	│ ip      │ addons-061252 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-061252        │ jenkins │ v1.37.0 │ 26 Oct 25 14:19 UTC │ 26 Oct 25 14:19 UTC │
	│ addons  │ addons-061252 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-061252        │ jenkins │ v1.37.0 │ 26 Oct 25 14:19 UTC │ 26 Oct 25 14:19 UTC │
	│ addons  │ addons-061252 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-061252        │ jenkins │ v1.37.0 │ 26 Oct 25 14:19 UTC │ 26 Oct 25 14:19 UTC │
	│ addons  │ addons-061252 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-061252        │ jenkins │ v1.37.0 │ 26 Oct 25 14:19 UTC │ 26 Oct 25 14:19 UTC │
	│ addons  │ addons-061252 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-061252        │ jenkins │ v1.37.0 │ 26 Oct 25 14:19 UTC │ 26 Oct 25 14:19 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-061252                                                                                                                                                                                                                                                                                                                                                                                         │ addons-061252        │ jenkins │ v1.37.0 │ 26 Oct 25 14:19 UTC │ 26 Oct 25 14:19 UTC │
	│ addons  │ addons-061252 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-061252        │ jenkins │ v1.37.0 │ 26 Oct 25 14:19 UTC │ 26 Oct 25 14:19 UTC │
	│ ssh     │ addons-061252 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-061252        │ jenkins │ v1.37.0 │ 26 Oct 25 14:19 UTC │                     │
	│ addons  │ addons-061252 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-061252        │ jenkins │ v1.37.0 │ 26 Oct 25 14:20 UTC │ 26 Oct 25 14:20 UTC │
	│ addons  │ addons-061252 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-061252        │ jenkins │ v1.37.0 │ 26 Oct 25 14:20 UTC │ 26 Oct 25 14:20 UTC │
	│ ip      │ addons-061252 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-061252        │ jenkins │ v1.37.0 │ 26 Oct 25 14:22 UTC │ 26 Oct 25 14:22 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 14:15:37
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 14:15:37.000556  141940 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:15:37.000834  141940 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:15:37.000845  141940 out.go:374] Setting ErrFile to fd 2...
	I1026 14:15:37.000851  141940 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:15:37.001077  141940 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-137233/.minikube/bin
	I1026 14:15:37.001627  141940 out.go:368] Setting JSON to false
	I1026 14:15:37.002518  141940 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3471,"bootTime":1761484666,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 14:15:37.002610  141940 start.go:141] virtualization: kvm guest
	I1026 14:15:37.004610  141940 out.go:179] * [addons-061252] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 14:15:37.005751  141940 notify.go:220] Checking for updates...
	I1026 14:15:37.005795  141940 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 14:15:37.006940  141940 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 14:15:37.008113  141940 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-137233/kubeconfig
	I1026 14:15:37.009171  141940 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-137233/.minikube
	I1026 14:15:37.010140  141940 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 14:15:37.011260  141940 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 14:15:37.012403  141940 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 14:15:37.042537  141940 out.go:179] * Using the kvm2 driver based on user configuration
	I1026 14:15:37.043572  141940 start.go:305] selected driver: kvm2
	I1026 14:15:37.043591  141940 start.go:925] validating driver "kvm2" against <nil>
	I1026 14:15:37.043606  141940 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 14:15:37.044383  141940 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 14:15:37.044691  141940 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 14:15:37.044721  141940 cni.go:84] Creating CNI manager for ""
	I1026 14:15:37.044778  141940 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 14:15:37.044798  141940 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1026 14:15:37.044858  141940 start.go:349] cluster config:
	{Name:addons-061252 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-061252 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 14:15:37.044967  141940 iso.go:125] acquiring lock: {Name:mkfe78fcc13f0f0cc3fec30206c34a5da423b32d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 14:15:37.046225  141940 out.go:179] * Starting "addons-061252" primary control-plane node in "addons-061252" cluster
	I1026 14:15:37.047173  141940 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 14:15:37.047223  141940 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-137233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1026 14:15:37.047237  141940 cache.go:58] Caching tarball of preloaded images
	I1026 14:15:37.047355  141940 preload.go:233] Found /home/jenkins/minikube-integration/21664-137233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 14:15:37.047370  141940 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 14:15:37.047710  141940 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/config.json ...
	I1026 14:15:37.047742  141940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/config.json: {Name:mkf60473ecda3da8a56b243fdb619702be3a5e0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:15:37.048434  141940 start.go:360] acquireMachinesLock for addons-061252: {Name:mka0e861669c2f6d38861d0614c7d3b8dd89392c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1026 14:15:37.048540  141940 start.go:364] duration metric: took 83.953µs to acquireMachinesLock for "addons-061252"
	I1026 14:15:37.048570  141940 start.go:93] Provisioning new machine with config: &{Name:addons-061252 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-061252 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 14:15:37.048668  141940 start.go:125] createHost starting for "" (driver="kvm2")
	I1026 14:15:37.050021  141940 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1026 14:15:37.050232  141940 start.go:159] libmachine.API.Create for "addons-061252" (driver="kvm2")
	I1026 14:15:37.050271  141940 client.go:168] LocalClient.Create starting
	I1026 14:15:37.050489  141940 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca.pem
	I1026 14:15:37.336950  141940 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/cert.pem
	I1026 14:15:37.699599  141940 main.go:141] libmachine: creating domain...
	I1026 14:15:37.699626  141940 main.go:141] libmachine: creating network...
	I1026 14:15:37.701194  141940 main.go:141] libmachine: found existing default network
	I1026 14:15:37.701426  141940 main.go:141] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1026 14:15:37.702680  141940 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001df48e0}
	I1026 14:15:37.702819  141940 main.go:141] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-061252</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1026 14:15:37.708360  141940 main.go:141] libmachine: creating private network mk-addons-061252 192.168.39.0/24...
	I1026 14:15:37.773073  141940 main.go:141] libmachine: private network mk-addons-061252 192.168.39.0/24 created
	I1026 14:15:37.773503  141940 main.go:141] libmachine: <network>
	  <name>mk-addons-061252</name>
	  <uuid>5380e78c-ad65-4c3f-9e0d-ef0b3c9ef51d</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:bc:f8:db'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
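(For reference: the XML above is what minikube hands to libvirt when it creates the isolated mk-addons-061252 network. A minimal sketch of the same define-and-start step, assuming the Go bindings at libvirt.org/go/libvirt; the trimmed XML literal and panic-on-error handling are simplifications:)

package main

import (
	"fmt"

	libvirt "libvirt.org/go/libvirt"
)

const networkXML = `<network>
  <name>mk-addons-061252</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Define the persistent network, then bring it up; libvirt fills in the
	// uuid, bridge name and MAC that appear in the dumped XML above.
	nw, err := conn.NetworkDefineXML(networkXML)
	if err != nil {
		panic(err)
	}
	defer nw.Free()

	if err := nw.Create(); err != nil {
		panic(err)
	}
	fmt.Println("network mk-addons-061252 defined and started")
}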
	
	I1026 14:15:37.773549  141940 main.go:141] libmachine: setting up store path in /home/jenkins/minikube-integration/21664-137233/.minikube/machines/addons-061252 ...
	I1026 14:15:37.773575  141940 main.go:141] libmachine: building disk image from file:///home/jenkins/minikube-integration/21664-137233/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso
	I1026 14:15:37.773587  141940 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21664-137233/.minikube
	I1026 14:15:37.773660  141940 main.go:141] libmachine: Downloading /home/jenkins/minikube-integration/21664-137233/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21664-137233/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso...
	I1026 14:15:38.078737  141940 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21664-137233/.minikube/machines/addons-061252/id_rsa...
	I1026 14:15:38.198798  141940 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21664-137233/.minikube/machines/addons-061252/addons-061252.rawdisk...
	I1026 14:15:38.198846  141940 main.go:141] libmachine: Writing magic tar header
	I1026 14:15:38.198897  141940 main.go:141] libmachine: Writing SSH key tar header
	I1026 14:15:38.198975  141940 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21664-137233/.minikube/machines/addons-061252 ...
	I1026 14:15:38.199038  141940 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21664-137233/.minikube/machines/addons-061252
	I1026 14:15:38.199075  141940 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21664-137233/.minikube/machines/addons-061252 (perms=drwx------)
	I1026 14:15:38.199094  141940 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21664-137233/.minikube/machines
	I1026 14:15:38.199108  141940 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21664-137233/.minikube/machines (perms=drwxr-xr-x)
	I1026 14:15:38.199122  141940 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21664-137233/.minikube
	I1026 14:15:38.199133  141940 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21664-137233/.minikube (perms=drwxr-xr-x)
	I1026 14:15:38.199144  141940 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21664-137233
	I1026 14:15:38.199153  141940 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21664-137233 (perms=drwxrwxr-x)
	I1026 14:15:38.199163  141940 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1026 14:15:38.199171  141940 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1026 14:15:38.199189  141940 main.go:141] libmachine: checking permissions on dir: /home/jenkins
	I1026 14:15:38.199199  141940 main.go:141] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1026 14:15:38.199210  141940 main.go:141] libmachine: checking permissions on dir: /home
	I1026 14:15:38.199217  141940 main.go:141] libmachine: skipping /home - not owner
	I1026 14:15:38.199224  141940 main.go:141] libmachine: defining domain...
	I1026 14:15:38.200417  141940 main.go:141] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-061252</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21664-137233/.minikube/machines/addons-061252/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21664-137233/.minikube/machines/addons-061252/addons-061252.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-061252'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
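(Similarly, the domain XML above is defined and then booted through libvirt. A short sketch of those two calls with the same Go bindings, assuming the XML has been saved locally as addons-061252.xml:)

package main

import (
	"fmt"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// Assumes the domain XML printed above was saved to addons-061252.xml.
	xml, err := os.ReadFile("addons-061252.xml")
	if err != nil {
		panic(err)
	}

	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// DomainDefineXML registers the persistent VM; Create() boots it,
	// matching the "defining domain" / "starting domain" lines in the log.
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		panic(err)
	}
	fmt.Println("domain addons-061252 defined and started")
}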
	
	I1026 14:15:38.207359  141940 main.go:141] libmachine: domain addons-061252 has defined MAC address 52:54:00:98:fb:07 in network default
	I1026 14:15:38.207926  141940 main.go:141] libmachine: domain addons-061252 has defined MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:15:38.207946  141940 main.go:141] libmachine: starting domain...
	I1026 14:15:38.207951  141940 main.go:141] libmachine: ensuring networks are active...
	I1026 14:15:38.208577  141940 main.go:141] libmachine: Ensuring network default is active
	I1026 14:15:38.208951  141940 main.go:141] libmachine: Ensuring network mk-addons-061252 is active
	I1026 14:15:38.209537  141940 main.go:141] libmachine: getting domain XML...
	I1026 14:15:38.210513  141940 main.go:141] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-061252</name>
	  <uuid>f2deb9cc-073c-4b22-8e7b-49ff4094ceba</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21664-137233/.minikube/machines/addons-061252/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21664-137233/.minikube/machines/addons-061252/addons-061252.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:76:a9:1d'/>
	      <source network='mk-addons-061252'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:98:fb:07'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1026 14:15:39.496427  141940 main.go:141] libmachine: waiting for domain to start...
	I1026 14:15:39.497830  141940 main.go:141] libmachine: domain is now running
	I1026 14:15:39.497855  141940 main.go:141] libmachine: waiting for IP...
	I1026 14:15:39.498761  141940 main.go:141] libmachine: domain addons-061252 has defined MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:15:39.499449  141940 main.go:141] libmachine: no network interface addresses found for domain addons-061252 (source=lease)
	I1026 14:15:39.499490  141940 main.go:141] libmachine: trying to list again with source=arp
	I1026 14:15:39.499777  141940 main.go:141] libmachine: unable to find current IP address of domain addons-061252 in network mk-addons-061252 (interfaces detected: [])
	I1026 14:15:39.499836  141940 retry.go:31] will retry after 296.390864ms: waiting for domain to come up
	I1026 14:15:39.798309  141940 main.go:141] libmachine: domain addons-061252 has defined MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:15:39.798921  141940 main.go:141] libmachine: no network interface addresses found for domain addons-061252 (source=lease)
	I1026 14:15:39.798933  141940 main.go:141] libmachine: trying to list again with source=arp
	I1026 14:15:39.799217  141940 main.go:141] libmachine: unable to find current IP address of domain addons-061252 in network mk-addons-061252 (interfaces detected: [])
	I1026 14:15:39.799251  141940 retry.go:31] will retry after 317.48785ms: waiting for domain to come up
	I1026 14:15:40.118827  141940 main.go:141] libmachine: domain addons-061252 has defined MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:15:40.119402  141940 main.go:141] libmachine: no network interface addresses found for domain addons-061252 (source=lease)
	I1026 14:15:40.119421  141940 main.go:141] libmachine: trying to list again with source=arp
	I1026 14:15:40.119713  141940 main.go:141] libmachine: unable to find current IP address of domain addons-061252 in network mk-addons-061252 (interfaces detected: [])
	I1026 14:15:40.119752  141940 retry.go:31] will retry after 339.941766ms: waiting for domain to come up
	I1026 14:15:40.461208  141940 main.go:141] libmachine: domain addons-061252 has defined MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:15:40.461749  141940 main.go:141] libmachine: no network interface addresses found for domain addons-061252 (source=lease)
	I1026 14:15:40.461762  141940 main.go:141] libmachine: trying to list again with source=arp
	I1026 14:15:40.462068  141940 main.go:141] libmachine: unable to find current IP address of domain addons-061252 in network mk-addons-061252 (interfaces detected: [])
	I1026 14:15:40.462101  141940 retry.go:31] will retry after 391.412464ms: waiting for domain to come up
	I1026 14:15:40.854643  141940 main.go:141] libmachine: domain addons-061252 has defined MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:15:40.855171  141940 main.go:141] libmachine: no network interface addresses found for domain addons-061252 (source=lease)
	I1026 14:15:40.855187  141940 main.go:141] libmachine: trying to list again with source=arp
	I1026 14:15:40.855587  141940 main.go:141] libmachine: unable to find current IP address of domain addons-061252 in network mk-addons-061252 (interfaces detected: [])
	I1026 14:15:40.855627  141940 retry.go:31] will retry after 714.998408ms: waiting for domain to come up
	I1026 14:15:41.572611  141940 main.go:141] libmachine: domain addons-061252 has defined MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:15:41.573080  141940 main.go:141] libmachine: no network interface addresses found for domain addons-061252 (source=lease)
	I1026 14:15:41.573096  141940 main.go:141] libmachine: trying to list again with source=arp
	I1026 14:15:41.573501  141940 main.go:141] libmachine: unable to find current IP address of domain addons-061252 in network mk-addons-061252 (interfaces detected: [])
	I1026 14:15:41.573542  141940 retry.go:31] will retry after 663.193936ms: waiting for domain to come up
	I1026 14:15:42.238768  141940 main.go:141] libmachine: domain addons-061252 has defined MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:15:42.239195  141940 main.go:141] libmachine: no network interface addresses found for domain addons-061252 (source=lease)
	I1026 14:15:42.239209  141940 main.go:141] libmachine: trying to list again with source=arp
	I1026 14:15:42.239493  141940 main.go:141] libmachine: unable to find current IP address of domain addons-061252 in network mk-addons-061252 (interfaces detected: [])
	I1026 14:15:42.239534  141940 retry.go:31] will retry after 1.1605093s: waiting for domain to come up
	I1026 14:15:43.401901  141940 main.go:141] libmachine: domain addons-061252 has defined MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:15:43.402489  141940 main.go:141] libmachine: no network interface addresses found for domain addons-061252 (source=lease)
	I1026 14:15:43.402507  141940 main.go:141] libmachine: trying to list again with source=arp
	I1026 14:15:43.402789  141940 main.go:141] libmachine: unable to find current IP address of domain addons-061252 in network mk-addons-061252 (interfaces detected: [])
	I1026 14:15:43.402827  141940 retry.go:31] will retry after 1.248304037s: waiting for domain to come up
	I1026 14:15:44.653156  141940 main.go:141] libmachine: domain addons-061252 has defined MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:15:44.653598  141940 main.go:141] libmachine: no network interface addresses found for domain addons-061252 (source=lease)
	I1026 14:15:44.653613  141940 main.go:141] libmachine: trying to list again with source=arp
	I1026 14:15:44.653905  141940 main.go:141] libmachine: unable to find current IP address of domain addons-061252 in network mk-addons-061252 (interfaces detected: [])
	I1026 14:15:44.653942  141940 retry.go:31] will retry after 1.58006494s: waiting for domain to come up
	I1026 14:15:46.236843  141940 main.go:141] libmachine: domain addons-061252 has defined MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:15:46.237482  141940 main.go:141] libmachine: no network interface addresses found for domain addons-061252 (source=lease)
	I1026 14:15:46.237501  141940 main.go:141] libmachine: trying to list again with source=arp
	I1026 14:15:46.237784  141940 main.go:141] libmachine: unable to find current IP address of domain addons-061252 in network mk-addons-061252 (interfaces detected: [])
	I1026 14:15:46.237840  141940 retry.go:31] will retry after 1.398627308s: waiting for domain to come up
	I1026 14:15:47.638719  141940 main.go:141] libmachine: domain addons-061252 has defined MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:15:47.639331  141940 main.go:141] libmachine: no network interface addresses found for domain addons-061252 (source=lease)
	I1026 14:15:47.639350  141940 main.go:141] libmachine: trying to list again with source=arp
	I1026 14:15:47.639664  141940 main.go:141] libmachine: unable to find current IP address of domain addons-061252 in network mk-addons-061252 (interfaces detected: [])
	I1026 14:15:47.639707  141940 retry.go:31] will retry after 2.860294142s: waiting for domain to come up
	I1026 14:15:50.503882  141940 main.go:141] libmachine: domain addons-061252 has defined MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:15:50.504412  141940 main.go:141] libmachine: no network interface addresses found for domain addons-061252 (source=lease)
	I1026 14:15:50.504432  141940 main.go:141] libmachine: trying to list again with source=arp
	I1026 14:15:50.504701  141940 main.go:141] libmachine: unable to find current IP address of domain addons-061252 in network mk-addons-061252 (interfaces detected: [])
	I1026 14:15:50.504748  141940 retry.go:31] will retry after 3.484068187s: waiting for domain to come up
	I1026 14:15:53.991245  141940 main.go:141] libmachine: domain addons-061252 has defined MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:15:53.991794  141940 main.go:141] libmachine: domain addons-061252 has current primary IP address 192.168.39.34 and MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:15:53.991808  141940 main.go:141] libmachine: found domain IP: 192.168.39.34
	I1026 14:15:53.991815  141940 main.go:141] libmachine: reserving static IP address...
	I1026 14:15:53.992194  141940 main.go:141] libmachine: unable to find host DHCP lease matching {name: "addons-061252", mac: "52:54:00:76:a9:1d", ip: "192.168.39.34"} in network mk-addons-061252
	I1026 14:15:54.178238  141940 main.go:141] libmachine: reserved static IP address 192.168.39.34 for domain addons-061252
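(The retry loop above is minikube polling libvirt for a DHCP lease matching the new domain's MAC address until an IP appears. A compact version of that wait, assuming the same Go bindings; the network name and MAC come from the log, while the helper name and the 500ms poll interval are made up:)

package main

import (
	"fmt"
	"time"

	libvirt "libvirt.org/go/libvirt"
)

// waitForLeaseIP polls the network's DHCP leases until one matches the
// domain's MAC address, mirroring the "waiting for IP" retries in the log.
func waitForLeaseIP(conn *libvirt.Connect, networkName, mac string, timeout time.Duration) (string, error) {
	network, err := conn.LookupNetworkByName(networkName)
	if err != nil {
		return "", err
	}
	defer network.Free()

	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		leases, err := network.GetDHCPLeases()
		if err != nil {
			return "", err
		}
		for _, l := range leases {
			if l.Mac == mac && l.IPaddr != "" {
				return l.IPaddr, nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return "", fmt.Errorf("no DHCP lease for %s in network %s", mac, networkName)
}

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// MAC and network name taken from the log above.
	ip, err := waitForLeaseIP(conn, "mk-addons-061252", "52:54:00:76:a9:1d", 2*time.Minute)
	if err != nil {
		panic(err)
	}
	fmt.Println("domain IP:", ip) // 192.168.39.34 in this run
}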
	I1026 14:15:54.178271  141940 main.go:141] libmachine: waiting for SSH...
	I1026 14:15:54.178281  141940 main.go:141] libmachine: Getting to WaitForSSH function...
	I1026 14:15:54.180756  141940 main.go:141] libmachine: domain addons-061252 has defined MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:15:54.181128  141940 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:76:a9:1d", ip: ""} in network mk-addons-061252: {Iface:virbr1 ExpiryTime:2025-10-26 15:15:52 +0000 UTC Type:0 Mac:52:54:00:76:a9:1d Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:minikube Clientid:01:52:54:00:76:a9:1d}
	I1026 14:15:54.181154  141940 main.go:141] libmachine: domain addons-061252 has defined IP address 192.168.39.34 and MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:15:54.181328  141940 main.go:141] libmachine: Using SSH client type: native
	I1026 14:15:54.181598  141940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 192.168.39.34 22 <nil> <nil>}
	I1026 14:15:54.181611  141940 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1026 14:15:54.299414  141940 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 14:15:54.299807  141940 main.go:141] libmachine: domain creation complete
	I1026 14:15:54.301415  141940 machine.go:93] provisionDockerMachine start ...
	I1026 14:15:54.303838  141940 main.go:141] libmachine: domain addons-061252 has defined MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:15:54.304251  141940 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:76:a9:1d", ip: ""} in network mk-addons-061252: {Iface:virbr1 ExpiryTime:2025-10-26 15:15:52 +0000 UTC Type:0 Mac:52:54:00:76:a9:1d Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-061252 Clientid:01:52:54:00:76:a9:1d}
	I1026 14:15:54.304274  141940 main.go:141] libmachine: domain addons-061252 has defined IP address 192.168.39.34 and MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:15:54.304498  141940 main.go:141] libmachine: Using SSH client type: native
	I1026 14:15:54.304737  141940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 192.168.39.34 22 <nil> <nil>}
	I1026 14:15:54.304749  141940 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 14:15:54.420076  141940 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1026 14:15:54.420111  141940 buildroot.go:166] provisioning hostname "addons-061252"
	I1026 14:15:54.423133  141940 main.go:141] libmachine: domain addons-061252 has defined MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:15:54.423596  141940 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:76:a9:1d", ip: ""} in network mk-addons-061252: {Iface:virbr1 ExpiryTime:2025-10-26 15:15:52 +0000 UTC Type:0 Mac:52:54:00:76:a9:1d Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-061252 Clientid:01:52:54:00:76:a9:1d}
	I1026 14:15:54.423627  141940 main.go:141] libmachine: domain addons-061252 has defined IP address 192.168.39.34 and MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:15:54.423784  141940 main.go:141] libmachine: Using SSH client type: native
	I1026 14:15:54.424007  141940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 192.168.39.34 22 <nil> <nil>}
	I1026 14:15:54.424024  141940 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-061252 && echo "addons-061252" | sudo tee /etc/hostname
	I1026 14:15:54.555525  141940 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-061252
	
	I1026 14:15:54.558244  141940 main.go:141] libmachine: domain addons-061252 has defined MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:15:54.558619  141940 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:76:a9:1d", ip: ""} in network mk-addons-061252: {Iface:virbr1 ExpiryTime:2025-10-26 15:15:52 +0000 UTC Type:0 Mac:52:54:00:76:a9:1d Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-061252 Clientid:01:52:54:00:76:a9:1d}
	I1026 14:15:54.558636  141940 main.go:141] libmachine: domain addons-061252 has defined IP address 192.168.39.34 and MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:15:54.558817  141940 main.go:141] libmachine: Using SSH client type: native
	I1026 14:15:54.559012  141940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 192.168.39.34 22 <nil> <nil>}
	I1026 14:15:54.559027  141940 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-061252' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-061252/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-061252' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 14:15:54.685996  141940 main.go:141] libmachine: SSH cmd err, output: <nil>: 
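(Each provisioning step above is a one-shot command run over SSH as the docker user with the key generated earlier. A bare-bones runner using golang.org/x/crypto/ssh; the key path, user and address come from the log, and skipping host-key verification here is purely for illustration:)

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21664-137233/.minikube/machines/addons-061252/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}

	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // illustration only
	}

	client, err := ssh.Dial("tcp", "192.168.39.34:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	// Same kind of one-shot command the provisioner issues, e.g. `hostname`.
	out, err := session.CombinedOutput("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Printf("remote hostname: %s", out)
}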
	I1026 14:15:54.686028  141940 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21664-137233/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-137233/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-137233/.minikube}
	I1026 14:15:54.686054  141940 buildroot.go:174] setting up certificates
	I1026 14:15:54.686091  141940 provision.go:84] configureAuth start
	I1026 14:15:54.689179  141940 main.go:141] libmachine: domain addons-061252 has defined MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:15:54.689601  141940 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:76:a9:1d", ip: ""} in network mk-addons-061252: {Iface:virbr1 ExpiryTime:2025-10-26 15:15:52 +0000 UTC Type:0 Mac:52:54:00:76:a9:1d Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-061252 Clientid:01:52:54:00:76:a9:1d}
	I1026 14:15:54.689635  141940 main.go:141] libmachine: domain addons-061252 has defined IP address 192.168.39.34 and MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:15:54.691870  141940 main.go:141] libmachine: domain addons-061252 has defined MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:15:54.692251  141940 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:76:a9:1d", ip: ""} in network mk-addons-061252: {Iface:virbr1 ExpiryTime:2025-10-26 15:15:52 +0000 UTC Type:0 Mac:52:54:00:76:a9:1d Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-061252 Clientid:01:52:54:00:76:a9:1d}
	I1026 14:15:54.692279  141940 main.go:141] libmachine: domain addons-061252 has defined IP address 192.168.39.34 and MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:15:54.692413  141940 provision.go:143] copyHostCerts
	I1026 14:15:54.692522  141940 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-137233/.minikube/ca.pem (1082 bytes)
	I1026 14:15:54.692697  141940 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-137233/.minikube/cert.pem (1123 bytes)
	I1026 14:15:54.692810  141940 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-137233/.minikube/key.pem (1675 bytes)
	I1026 14:15:54.692905  141940 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-137233/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca-key.pem org=jenkins.addons-061252 san=[127.0.0.1 192.168.39.34 addons-061252 localhost minikube]
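	The step above signs a server certificate for the listed SANs (127.0.0.1, the node IP, the hostname, localhost, minikube) with the shared CA. Below is a minimal, self-contained sketch of that technique using Go's crypto/x509; the key size, validity period, and the choice to print the PEM to stdout are illustrative assumptions, not minikube's actual implementation.

```go
// servercert.go: sign a server certificate for a set of SANs with a CA.
// Illustrative sketch only; key size, validity and output are assumptions.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Generate a throwaway CA; the real flow loads an existing ca.pem/ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the DNS and IP SANs from the run above, signed by the CA.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-061252"}},
		DNSNames:     []string{"addons-061252", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.34")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	// Emit server.pem; server-key.pem would be written the same way.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
```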
	I1026 14:15:55.050425  141940 provision.go:177] copyRemoteCerts
	I1026 14:15:55.050507  141940 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 14:15:55.053472  141940 main.go:141] libmachine: domain addons-061252 has defined MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:15:55.053910  141940 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:76:a9:1d", ip: ""} in network mk-addons-061252: {Iface:virbr1 ExpiryTime:2025-10-26 15:15:52 +0000 UTC Type:0 Mac:52:54:00:76:a9:1d Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-061252 Clientid:01:52:54:00:76:a9:1d}
	I1026 14:15:55.053935  141940 main.go:141] libmachine: domain addons-061252 has defined IP address 192.168.39.34 and MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:15:55.054114  141940 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/addons-061252/id_rsa Username:docker}
	I1026 14:15:55.144171  141940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 14:15:55.173534  141940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 14:15:55.202934  141940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1026 14:15:55.236508  141940 provision.go:87] duration metric: took 550.395383ms to configureAuth
	I1026 14:15:55.236545  141940 buildroot.go:189] setting minikube options for container-runtime
	I1026 14:15:55.236743  141940 config.go:182] Loaded profile config "addons-061252": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:15:55.239290  141940 main.go:141] libmachine: domain addons-061252 has defined MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:15:55.239631  141940 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:76:a9:1d", ip: ""} in network mk-addons-061252: {Iface:virbr1 ExpiryTime:2025-10-26 15:15:52 +0000 UTC Type:0 Mac:52:54:00:76:a9:1d Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-061252 Clientid:01:52:54:00:76:a9:1d}
	I1026 14:15:55.239652  141940 main.go:141] libmachine: domain addons-061252 has defined IP address 192.168.39.34 and MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:15:55.239835  141940 main.go:141] libmachine: Using SSH client type: native
	I1026 14:15:55.240039  141940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 192.168.39.34 22 <nil> <nil>}
	I1026 14:15:55.240053  141940 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 14:15:55.496648  141940 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 14:15:55.496686  141940 machine.go:96] duration metric: took 1.195243756s to provisionDockerMachine
	I1026 14:15:55.496702  141940 client.go:171] duration metric: took 18.446421463s to LocalClient.Create
	I1026 14:15:55.496725  141940 start.go:167] duration metric: took 18.446493705s to libmachine.API.Create "addons-061252"
	I1026 14:15:55.496740  141940 start.go:293] postStartSetup for "addons-061252" (driver="kvm2")
	I1026 14:15:55.496756  141940 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 14:15:55.496842  141940 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 14:15:55.499769  141940 main.go:141] libmachine: domain addons-061252 has defined MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:15:55.500191  141940 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:76:a9:1d", ip: ""} in network mk-addons-061252: {Iface:virbr1 ExpiryTime:2025-10-26 15:15:52 +0000 UTC Type:0 Mac:52:54:00:76:a9:1d Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-061252 Clientid:01:52:54:00:76:a9:1d}
	I1026 14:15:55.500213  141940 main.go:141] libmachine: domain addons-061252 has defined IP address 192.168.39.34 and MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:15:55.500380  141940 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/addons-061252/id_rsa Username:docker}
	I1026 14:15:55.588960  141940 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 14:15:55.594315  141940 info.go:137] Remote host: Buildroot 2025.02
	I1026 14:15:55.594350  141940 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-137233/.minikube/addons for local assets ...
	I1026 14:15:55.594477  141940 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-137233/.minikube/files for local assets ...
	I1026 14:15:55.594528  141940 start.go:296] duration metric: took 97.779902ms for postStartSetup
	I1026 14:15:55.597609  141940 main.go:141] libmachine: domain addons-061252 has defined MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:15:55.598038  141940 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:76:a9:1d", ip: ""} in network mk-addons-061252: {Iface:virbr1 ExpiryTime:2025-10-26 15:15:52 +0000 UTC Type:0 Mac:52:54:00:76:a9:1d Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-061252 Clientid:01:52:54:00:76:a9:1d}
	I1026 14:15:55.598071  141940 main.go:141] libmachine: domain addons-061252 has defined IP address 192.168.39.34 and MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:15:55.598358  141940 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/config.json ...
	I1026 14:15:55.598598  141940 start.go:128] duration metric: took 18.549914714s to createHost
	I1026 14:15:55.600858  141940 main.go:141] libmachine: domain addons-061252 has defined MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:15:55.601204  141940 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:76:a9:1d", ip: ""} in network mk-addons-061252: {Iface:virbr1 ExpiryTime:2025-10-26 15:15:52 +0000 UTC Type:0 Mac:52:54:00:76:a9:1d Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-061252 Clientid:01:52:54:00:76:a9:1d}
	I1026 14:15:55.601227  141940 main.go:141] libmachine: domain addons-061252 has defined IP address 192.168.39.34 and MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:15:55.601404  141940 main.go:141] libmachine: Using SSH client type: native
	I1026 14:15:55.601612  141940 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 192.168.39.34 22 <nil> <nil>}
	I1026 14:15:55.601623  141940 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1026 14:15:55.719573  141940 main.go:141] libmachine: SSH cmd err, output: <nil>: 1761488155.678669376
	
	I1026 14:15:55.719595  141940 fix.go:216] guest clock: 1761488155.678669376
	I1026 14:15:55.719603  141940 fix.go:229] Guest: 2025-10-26 14:15:55.678669376 +0000 UTC Remote: 2025-10-26 14:15:55.598614482 +0000 UTC m=+18.648319539 (delta=80.054894ms)
	I1026 14:15:55.719621  141940 fix.go:200] guest clock delta is within tolerance: 80.054894ms
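	The clock-skew check above runs `date +%s.%N` in the guest, parses the result as a fractional Unix timestamp, and compares it with the host clock. A rough sketch of that parsing and comparison follows; the 1-second tolerance is an illustrative assumption, not minikube's configured value.

```go
// clockdelta.go: parse `date +%s.%N` output and compare it to the local clock.
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "1761488155.678669376" into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Normalize the fraction to exactly 9 digits of nanoseconds.
		frac := parts[1]
		if len(frac) > 9 {
			frac = frac[:9]
		}
		frac += strings.Repeat("0", 9-len(frac))
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1761488155.678669376")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	const tolerance = time.Second // illustrative assumption
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n",
		delta, math.Abs(float64(delta)) <= float64(tolerance))
}
```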
	I1026 14:15:55.719627  141940 start.go:83] releasing machines lock for "addons-061252", held for 18.671073228s
	I1026 14:15:55.722242  141940 main.go:141] libmachine: domain addons-061252 has defined MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:15:55.722649  141940 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:76:a9:1d", ip: ""} in network mk-addons-061252: {Iface:virbr1 ExpiryTime:2025-10-26 15:15:52 +0000 UTC Type:0 Mac:52:54:00:76:a9:1d Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-061252 Clientid:01:52:54:00:76:a9:1d}
	I1026 14:15:55.722673  141940 main.go:141] libmachine: domain addons-061252 has defined IP address 192.168.39.34 and MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:15:55.723209  141940 ssh_runner.go:195] Run: cat /version.json
	I1026 14:15:55.723307  141940 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 14:15:55.726227  141940 main.go:141] libmachine: domain addons-061252 has defined MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:15:55.726351  141940 main.go:141] libmachine: domain addons-061252 has defined MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:15:55.726744  141940 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:76:a9:1d", ip: ""} in network mk-addons-061252: {Iface:virbr1 ExpiryTime:2025-10-26 15:15:52 +0000 UTC Type:0 Mac:52:54:00:76:a9:1d Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-061252 Clientid:01:52:54:00:76:a9:1d}
	I1026 14:15:55.726747  141940 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:76:a9:1d", ip: ""} in network mk-addons-061252: {Iface:virbr1 ExpiryTime:2025-10-26 15:15:52 +0000 UTC Type:0 Mac:52:54:00:76:a9:1d Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-061252 Clientid:01:52:54:00:76:a9:1d}
	I1026 14:15:55.726788  141940 main.go:141] libmachine: domain addons-061252 has defined IP address 192.168.39.34 and MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:15:55.726807  141940 main.go:141] libmachine: domain addons-061252 has defined IP address 192.168.39.34 and MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:15:55.726950  141940 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/addons-061252/id_rsa Username:docker}
	I1026 14:15:55.727091  141940 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/addons-061252/id_rsa Username:docker}
	I1026 14:15:55.842355  141940 ssh_runner.go:195] Run: systemctl --version
	I1026 14:15:55.849229  141940 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 14:15:56.006247  141940 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 14:15:56.013107  141940 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 14:15:56.013202  141940 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 14:15:56.033622  141940 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 14:15:56.033660  141940 start.go:495] detecting cgroup driver to use...
	I1026 14:15:56.033770  141940 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 14:15:56.053875  141940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 14:15:56.071091  141940 docker.go:218] disabling cri-docker service (if available) ...
	I1026 14:15:56.071195  141940 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 14:15:56.089314  141940 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 14:15:56.106208  141940 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 14:15:56.253708  141940 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 14:15:56.469061  141940 docker.go:234] disabling docker service ...
	I1026 14:15:56.469148  141940 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 14:15:56.488924  141940 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 14:15:56.504546  141940 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 14:15:56.652741  141940 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 14:15:56.791570  141940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 14:15:56.806745  141940 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 14:15:56.829663  141940 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 14:15:56.829729  141940 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 14:15:56.841733  141940 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 14:15:56.841803  141940 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 14:15:56.854168  141940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 14:15:56.867618  141940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 14:15:56.881571  141940 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 14:15:56.895877  141940 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 14:15:56.909194  141940 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 14:15:56.932778  141940 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 14:15:56.945877  141940 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 14:15:56.957310  141940 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1026 14:15:56.957371  141940 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1026 14:15:56.977888  141940 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
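	The failed sysctl above is the expected probe: /proc/sys/net/bridge/ only appears once the br_netfilter module is loaded, so the fallback is to modprobe it and then enable IPv4 forwarding. A small sketch of that check-then-load pattern (must run as root; error handling kept minimal):

```go
// netfilter.go: ensure br_netfilter is loaded and IPv4 forwarding is on.
// Sketch only; requires root and a host with modprobe available.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"

	// Probe first: the sysctl file only exists once br_netfilter is loaded.
	if _, err := os.Stat(key); err != nil {
		fmt.Println("br_netfilter not loaded, loading it:", err)
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe failed: %v: %s\n", err, out)
			os.Exit(1)
		}
	}

	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		fmt.Println("enabling ip_forward failed:", err)
		os.Exit(1)
	}
	fmt.Println("bridge netfilter and ip_forward configured")
}
```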
	I1026 14:15:56.989744  141940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 14:15:57.129185  141940 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 14:15:57.243835  141940 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 14:15:57.243945  141940 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 14:15:57.249261  141940 start.go:563] Will wait 60s for crictl version
	I1026 14:15:57.249342  141940 ssh_runner.go:195] Run: which crictl
	I1026 14:15:57.253334  141940 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1026 14:15:57.290962  141940 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1026 14:15:57.291100  141940 ssh_runner.go:195] Run: crio --version
	I1026 14:15:57.319605  141940 ssh_runner.go:195] Run: crio --version
	I1026 14:15:57.348400  141940 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1026 14:15:57.352314  141940 main.go:141] libmachine: domain addons-061252 has defined MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:15:57.352731  141940 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:76:a9:1d", ip: ""} in network mk-addons-061252: {Iface:virbr1 ExpiryTime:2025-10-26 15:15:52 +0000 UTC Type:0 Mac:52:54:00:76:a9:1d Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-061252 Clientid:01:52:54:00:76:a9:1d}
	I1026 14:15:57.352754  141940 main.go:141] libmachine: domain addons-061252 has defined IP address 192.168.39.34 and MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:15:57.352941  141940 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1026 14:15:57.357610  141940 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
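	The command above is a replace-or-append on /etc/hosts: drop any existing host.minikube.internal line, append the fresh mapping, and copy the result back into place. The same idea expressed natively in Go is sketched below; the real flow shells out through sudo instead of writing the file directly.

```go
// hosts.go: replace-or-append a hosts entry, same idea as the grep -v + echo + cp above.
package main

import (
	"fmt"
	"os"
	"strings"
)

// setHostsEntry rewrites hostsPath so it contains exactly one "ip\tname" line.
func setHostsEntry(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Drop any line that already maps this name (mirrors grep -v '\tname$').
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	tmp := hostsPath + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, hostsPath)
}

func main() {
	if err := setHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Println("update failed (needs root):", err)
	}
}
```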
	I1026 14:15:57.376515  141940 kubeadm.go:883] updating cluster {Name:addons-061252 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.
1 ClusterName:addons-061252 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.34 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disa
bleOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 14:15:57.376633  141940 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 14:15:57.376680  141940 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 14:15:57.417595  141940 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1026 14:15:57.417715  141940 ssh_runner.go:195] Run: which lz4
	I1026 14:15:57.421880  141940 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1026 14:15:57.426312  141940 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1026 14:15:57.426351  141940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1026 14:15:58.734447  141940 crio.go:462] duration metric: took 1.31259222s to copy over tarball
	I1026 14:15:58.734547  141940 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1026 14:16:00.279652  141940 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.54506917s)
	I1026 14:16:00.279679  141940 crio.go:469] duration metric: took 1.545179603s to extract the tarball
	I1026 14:16:00.279687  141940 ssh_runner.go:146] rm: /preloaded.tar.lz4
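	The preload path above is: if /preloaded.tar.lz4 is absent, copy the cached image tarball over SSH, extract it into /var with lz4, then delete it. A bare-bones sketch of the extract-and-clean-up step, shelling out to the same tar invocation shown in the log (assumes tar and lz4 are installed on the target):

```go
// preload.go: extract a preloaded image tarball into /var, then remove it.
// Mirrors the tar invocation from the log; assumes tar and lz4 are present.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"

	if _, err := os.Stat(tarball); err != nil {
		fmt.Println("no preload tarball, nothing to do:", err)
		return
	}

	// Same flags as the log: keep xattrs (security.capability), decompress with lz4.
	cmd := exec.Command("tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v: %s\n", err, out)
		os.Exit(1)
	}

	// The tarball is only a transfer vehicle; remove it once extracted.
	if err := os.Remove(tarball); err != nil {
		fmt.Println("cleanup failed:", err)
	}
}
```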
	I1026 14:16:00.319406  141940 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 14:16:00.362277  141940 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 14:16:00.362299  141940 cache_images.go:85] Images are preloaded, skipping loading
	I1026 14:16:00.362307  141940 kubeadm.go:934] updating node { 192.168.39.34 8443 v1.34.1 crio true true} ...
	I1026 14:16:00.362439  141940 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-061252 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.34
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-061252 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
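	The kubelet drop-in shown above is rendered from the node's settings (kubelet binary path, hostname override, node IP). A hedged sketch of that kind of templating with text/template follows; the template text is a simplified stand-in built from the log output, not minikube's actual template.

```go
// kubeletunit.go: render a kubelet systemd drop-in from node settings.
// The template below is a simplified stand-in, not minikube's real one.
package main

import (
	"os"
	"text/template"
)

type kubeletOpts struct {
	KubeletPath string
	NodeName    string
	NodeIP      string
}

const unitTmpl = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unitTmpl))
	// Values taken from the run above.
	opts := kubeletOpts{
		KubeletPath: "/var/lib/minikube/binaries/v1.34.1/kubelet",
		NodeName:    "addons-061252",
		NodeIP:      "192.168.39.34",
	}
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}
```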
	I1026 14:16:00.362553  141940 ssh_runner.go:195] Run: crio config
	I1026 14:16:00.408250  141940 cni.go:84] Creating CNI manager for ""
	I1026 14:16:00.408273  141940 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 14:16:00.408297  141940 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 14:16:00.408324  141940 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.34 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-061252 NodeName:addons-061252 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.34"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.34 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kube
rnetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 14:16:00.408448  141940 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.34
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-061252"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.34"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.34"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 14:16:00.408533  141940 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 14:16:00.420479  141940 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 14:16:00.420546  141940 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 14:16:00.431579  141940 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1026 14:16:00.450389  141940 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 14:16:00.470154  141940 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
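	The kubeadm.yaml.new written above is the four-document YAML stream printed earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick, assumption-light way to sanity-check such a file before `kubeadm init` is to split it on document separators and list the kinds, as sketched below; this deliberately uses naive string handling rather than a YAML library.

```go
// kubeadmkinds.go: list the `kind:` of each document in a multi-document kubeadm config.
// Naive string handling on purpose; good enough for a quick sanity check.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	path := "/var/tmp/minikube/kubeadm.yaml"
	if len(os.Args) > 1 {
		path = os.Args[1]
	}
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Println("read failed:", err)
		os.Exit(1)
	}
	for i, doc := range strings.Split(string(data), "\n---") {
		kind := "(none)"
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
				kind = strings.TrimSpace(strings.TrimPrefix(strings.TrimSpace(line), "kind:"))
				break
			}
		}
		fmt.Printf("document %d: kind=%s\n", i+1, kind)
	}
}
```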
	I1026 14:16:00.489091  141940 ssh_runner.go:195] Run: grep 192.168.39.34	control-plane.minikube.internal$ /etc/hosts
	I1026 14:16:00.493476  141940 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.34	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 14:16:00.509089  141940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 14:16:00.644674  141940 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 14:16:00.663247  141940 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252 for IP: 192.168.39.34
	I1026 14:16:00.663269  141940 certs.go:195] generating shared ca certs ...
	I1026 14:16:00.663287  141940 certs.go:227] acquiring lock for ca certs: {Name:mk93131c71acd79b9ab313e88723331b0af2d4bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:16:00.663466  141940 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-137233/.minikube/ca.key
	I1026 14:16:00.910919  141940 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-137233/.minikube/ca.crt ...
	I1026 14:16:00.910948  141940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-137233/.minikube/ca.crt: {Name:mk1a7b1297b09e49fcafa05f97e51a35c75caedf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:16:00.911121  141940 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-137233/.minikube/ca.key ...
	I1026 14:16:00.911133  141940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-137233/.minikube/ca.key: {Name:mk9b00b5a01bb9eb7959d4744da602898ae02049 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:16:00.911220  141940 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-137233/.minikube/proxy-client-ca.key
	I1026 14:16:00.987531  141940 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-137233/.minikube/proxy-client-ca.crt ...
	I1026 14:16:00.987560  141940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-137233/.minikube/proxy-client-ca.crt: {Name:mkcb65c932b1fbbd1b2c33b76ce36f90164452d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:16:00.987732  141940 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-137233/.minikube/proxy-client-ca.key ...
	I1026 14:16:00.987749  141940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-137233/.minikube/proxy-client-ca.key: {Name:mkd3db7fbbc49d803db2b44252665ca4d24ceb89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:16:00.988429  141940 certs.go:257] generating profile certs ...
	I1026 14:16:00.988519  141940 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/client.key
	I1026 14:16:00.988547  141940 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/client.crt with IP's: []
	I1026 14:16:01.287753  141940 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/client.crt ...
	I1026 14:16:01.287782  141940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/client.crt: {Name:mk5d94200ca614c89706d517d5ee8c2f9359447b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:16:01.288578  141940 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/client.key ...
	I1026 14:16:01.288599  141940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/client.key: {Name:mk40cbb97a692b842d336cc4ff2b649fb2647614 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:16:01.289083  141940 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/apiserver.key.27d0a7f0
	I1026 14:16:01.289106  141940 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/apiserver.crt.27d0a7f0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.34]
	I1026 14:16:01.434423  141940 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/apiserver.crt.27d0a7f0 ...
	I1026 14:16:01.434475  141940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/apiserver.crt.27d0a7f0: {Name:mkcf6c1d31d7bce6a02af33429805752c222ff7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:16:01.434684  141940 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/apiserver.key.27d0a7f0 ...
	I1026 14:16:01.434703  141940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/apiserver.key.27d0a7f0: {Name:mk436cb4c666b5e8535469da5ee740d604cca3f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:16:01.434805  141940 certs.go:382] copying /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/apiserver.crt.27d0a7f0 -> /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/apiserver.crt
	I1026 14:16:01.434908  141940 certs.go:386] copying /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/apiserver.key.27d0a7f0 -> /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/apiserver.key
	I1026 14:16:01.434982  141940 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/proxy-client.key
	I1026 14:16:01.435006  141940 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/proxy-client.crt with IP's: []
	I1026 14:16:01.911037  141940 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/proxy-client.crt ...
	I1026 14:16:01.911070  141940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/proxy-client.crt: {Name:mk75626e195f41db10ee1ed28fdd831d60a9f3f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:16:01.911294  141940 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/proxy-client.key ...
	I1026 14:16:01.911316  141940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/proxy-client.key: {Name:mk6d33bca62e3f690b7a52ce1929a7aabc93eab7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:16:01.911562  141940 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca-key.pem (1679 bytes)
	I1026 14:16:01.911607  141940 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca.pem (1082 bytes)
	I1026 14:16:01.911645  141940 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/cert.pem (1123 bytes)
	I1026 14:16:01.911674  141940 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/key.pem (1675 bytes)
	I1026 14:16:01.912418  141940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 14:16:01.946290  141940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1026 14:16:01.977116  141940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 14:16:02.007672  141940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 14:16:02.037629  141940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1026 14:16:02.068472  141940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 14:16:02.098071  141940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 14:16:02.126953  141940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 14:16:02.158171  141940 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 14:16:02.190429  141940 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 14:16:02.211951  141940 ssh_runner.go:195] Run: openssl version
	I1026 14:16:02.218148  141940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 14:16:02.231079  141940 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 14:16:02.236035  141940 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 14:16 /usr/share/ca-certificates/minikubeCA.pem
	I1026 14:16:02.236105  141940 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 14:16:02.243105  141940 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
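	The two commands above install the CA into the system trust store the way OpenSSL expects: compute the certificate's subject hash and symlink `<hash>.0` to it under /etc/ssl/certs. A sketch that shells out to the same openssl invocation and creates the link is below (paths taken from the run above; needs root):

```go
// catrust.go: link a CA cert into /etc/ssl/certs under its OpenSSL subject hash.
// Sketch only; shells out to openssl exactly as the log does and needs root.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	const caPath = "/etc/ssl/certs/minikubeCA.pem"

	// `openssl x509 -hash -noout -in <cert>` prints the subject hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", caPath).Output()
	if err != nil {
		fmt.Println("openssl failed:", err)
		os.Exit(1)
	}
	hash := strings.TrimSpace(string(out))

	// OpenSSL looks CAs up by "<subject-hash>.N"; ".0" is the first slot.
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if _, err := os.Lstat(link); err == nil {
		fmt.Println("already linked:", link)
		return
	}
	if err := os.Symlink(caPath, link); err != nil {
		fmt.Println("symlink failed (needs root):", err)
		os.Exit(1)
	}
	fmt.Println("linked", link, "->", caPath)
}
```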
	I1026 14:16:02.259552  141940 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 14:16:02.265195  141940 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 14:16:02.265264  141940 kubeadm.go:400] StartCluster: {Name:addons-061252 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 C
lusterName:addons-061252 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.34 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 14:16:02.265358  141940 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 14:16:02.265488  141940 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 14:16:02.313447  141940 cri.go:89] found id: ""
	I1026 14:16:02.313550  141940 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 14:16:02.325971  141940 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 14:16:02.337881  141940 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 14:16:02.349249  141940 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 14:16:02.349279  141940 kubeadm.go:157] found existing configuration files:
	
	I1026 14:16:02.349335  141940 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 14:16:02.360218  141940 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 14:16:02.360289  141940 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 14:16:02.371578  141940 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 14:16:02.381886  141940 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 14:16:02.381941  141940 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 14:16:02.393101  141940 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 14:16:02.403762  141940 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 14:16:02.403826  141940 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 14:16:02.415405  141940 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 14:16:02.426202  141940 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 14:16:02.426279  141940 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 14:16:02.437521  141940 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1026 14:16:02.482377  141940 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1026 14:16:02.482485  141940 kubeadm.go:318] [preflight] Running pre-flight checks
	I1026 14:16:02.572145  141940 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 14:16:02.572278  141940 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 14:16:02.572448  141940 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 14:16:02.584293  141940 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 14:16:02.736032  141940 out.go:252]   - Generating certificates and keys ...
	I1026 14:16:02.736139  141940 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1026 14:16:02.736208  141940 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1026 14:16:02.847767  141940 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 14:16:03.036119  141940 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1026 14:16:03.180864  141940 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1026 14:16:03.397946  141940 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1026 14:16:03.578790  141940 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1026 14:16:03.578929  141940 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-061252 localhost] and IPs [192.168.39.34 127.0.0.1 ::1]
	I1026 14:16:03.675613  141940 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1026 14:16:03.675780  141940 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-061252 localhost] and IPs [192.168.39.34 127.0.0.1 ::1]
	I1026 14:16:04.179740  141940 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 14:16:04.645056  141940 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 14:16:05.185778  141940 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1026 14:16:05.185886  141940 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 14:16:05.446285  141940 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 14:16:05.682002  141940 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 14:16:05.807903  141940 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 14:16:06.254177  141940 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 14:16:06.386084  141940 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 14:16:06.386646  141940 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 14:16:06.390938  141940 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 14:16:06.392914  141940 out.go:252]   - Booting up control plane ...
	I1026 14:16:06.393012  141940 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 14:16:06.393121  141940 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 14:16:06.393198  141940 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 14:16:06.409830  141940 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 14:16:06.409975  141940 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1026 14:16:06.415293  141940 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1026 14:16:06.415654  141940 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 14:16:06.415716  141940 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1026 14:16:06.574857  141940 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 14:16:06.574986  141940 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 14:16:07.575880  141940 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001984228s
	I1026 14:16:07.578402  141940 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1026 14:16:07.578552  141940 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.39.34:8443/livez
	I1026 14:16:07.578659  141940 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1026 14:16:07.578820  141940 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1026 14:16:09.786297  141940 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.209868767s
	I1026 14:16:10.865629  141940 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.29043034s
	I1026 14:16:12.575764  141940 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 5.001979948s
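	The three health checks above poll well-known endpoints until they answer: kube-apiserver's /livez on the node IP, and the controller-manager and scheduler health ports on localhost. A minimal poller in the same spirit is sketched below; TLS verification is skipped because these components serve self-signed certificates, and the interval and deadline values are illustrative.

```go
// healthwait.go: poll a component health endpoint until it answers 200 or time runs out.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url every interval until it returns 200 OK or the deadline passes.
func waitHealthy(url string, interval, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Control-plane components serve self-signed certs on these ports.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("%s not healthy within %v", url, deadline)
}

func main() {
	endpoints := []string{
		"https://192.168.39.34:8443/livez", // kube-apiserver
		"https://127.0.0.1:10257/healthz",  // kube-controller-manager
		"https://127.0.0.1:10259/livez",    // kube-scheduler
	}
	for _, url := range endpoints {
		if err := waitHealthy(url, time.Second, 4*time.Minute); err != nil {
			fmt.Println(err)
			continue
		}
		fmt.Println(url, "is healthy")
	}
}
```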
	I1026 14:16:12.597140  141940 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 14:16:12.607270  141940 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 14:16:12.620122  141940 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 14:16:12.620349  141940 kubeadm.go:318] [mark-control-plane] Marking the node addons-061252 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 14:16:12.633555  141940 kubeadm.go:318] [bootstrap-token] Using token: xfvrau.i99xe1wj840dw662
	I1026 14:16:12.635695  141940 out.go:252]   - Configuring RBAC rules ...
	I1026 14:16:12.635800  141940 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 14:16:12.639554  141940 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 14:16:12.648196  141940 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 14:16:12.651046  141940 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 14:16:12.654266  141940 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 14:16:12.657310  141940 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 14:16:12.982322  141940 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 14:16:13.408835  141940 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1026 14:16:13.984374  141940 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1026 14:16:13.988506  141940 kubeadm.go:318] 
	I1026 14:16:13.988643  141940 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1026 14:16:13.988667  141940 kubeadm.go:318] 
	I1026 14:16:13.988786  141940 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1026 14:16:13.988797  141940 kubeadm.go:318] 
	I1026 14:16:13.988831  141940 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1026 14:16:13.988932  141940 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 14:16:13.988987  141940 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 14:16:13.988994  141940 kubeadm.go:318] 
	I1026 14:16:13.989048  141940 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1026 14:16:13.989058  141940 kubeadm.go:318] 
	I1026 14:16:13.989130  141940 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 14:16:13.989140  141940 kubeadm.go:318] 
	I1026 14:16:13.989239  141940 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1026 14:16:13.989371  141940 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 14:16:13.989505  141940 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 14:16:13.989519  141940 kubeadm.go:318] 
	I1026 14:16:13.989630  141940 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 14:16:13.989728  141940 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1026 14:16:13.989742  141940 kubeadm.go:318] 
	I1026 14:16:13.989833  141940 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token xfvrau.i99xe1wj840dw662 \
	I1026 14:16:13.989964  141940 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3ad055a424ab8eb6b83482448af651001c6d6c03abf832b7f498f66a21acb6be \
	I1026 14:16:13.989987  141940 kubeadm.go:318] 	--control-plane 
	I1026 14:16:13.989991  141940 kubeadm.go:318] 
	I1026 14:16:13.990099  141940 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1026 14:16:13.990108  141940 kubeadm.go:318] 
	I1026 14:16:13.990206  141940 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token xfvrau.i99xe1wj840dw662 \
	I1026 14:16:13.990342  141940 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3ad055a424ab8eb6b83482448af651001c6d6c03abf832b7f498f66a21acb6be 
	I1026 14:16:13.994027  141940 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
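	The `--discovery-token-ca-cert-hash` printed in the join commands above lets a joining node pin the cluster CA: per the kubeadm documentation it is the hex-encoded SHA-256 digest of the CA certificate's Subject Public Key Info. The short sketch below recomputes it from a CA PEM so the value in a join command can be verified; the default path is the one used by this run.

```go
// cahash.go: recompute kubeadm's --discovery-token-ca-cert-hash from a CA certificate.
// The hash is sha256 over the CA cert's Subject Public Key Info (SPKI), hex encoded.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	path := "/var/lib/minikube/certs/ca.crt"
	if len(os.Args) > 1 {
		path = os.Args[1]
	}
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Println("read failed:", err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found in", path)
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse failed:", err)
		os.Exit(1)
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
}
```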
	I1026 14:16:13.994112  141940 cni.go:84] Creating CNI manager for ""
	I1026 14:16:13.994137  141940 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 14:16:13.995896  141940 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1026 14:16:13.996962  141940 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1026 14:16:14.009698  141940 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
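	The 496-byte /etc/cni/net.d/1-k8s.conflist copied above is the bridge CNI configuration; its exact contents are not shown in this log, so the snippet below only emits a generic bridge-plus-portmap conflist of the same shape as an illustration, reusing the pod CIDR from this run.

```go
// cniconf.go: emit a generic bridge + portmap CNI conflist.
// Illustrative only; the real 1-k8s.conflist contents are not shown in this log.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	conf := map[string]any{
		"cniVersion": "1.0.0",
		"name":       "k8s-pod-network", // name is an assumption
		"plugins": []map[string]any{
			{
				"type":             "bridge",
				"bridge":           "bridge", // bridge interface name is an assumption
				"isDefaultGateway": true,
				"ipMasq":           true,
				"hairpinMode":      true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16", // pod CIDR used by this run
				},
			},
			{
				"type": "portmap",
				"capabilities": map[string]bool{
					"portMappings": true,
				},
			},
		},
	}
	out, err := json.MarshalIndent(conf, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```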
	I1026 14:16:14.035449  141940 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 14:16:14.035595  141940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:16:14.035608  141940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-061252 minikube.k8s.io/updated_at=2025_10_26T14_16_14_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46 minikube.k8s.io/name=addons-061252 minikube.k8s.io/primary=true
	I1026 14:16:14.085337  141940 ops.go:34] apiserver oom_adj: -16
	I1026 14:16:14.176585  141940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:16:14.677442  141940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:16:15.176729  141940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:16:15.677528  141940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:16:16.177651  141940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:16:16.676675  141940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:16:17.176767  141940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:16:17.677572  141940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:16:18.177436  141940 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 14:16:18.256798  141940 kubeadm.go:1113] duration metric: took 4.221291605s to wait for elevateKubeSystemPrivileges
	I1026 14:16:18.256846  141940 kubeadm.go:402] duration metric: took 15.991590761s to StartCluster
	I1026 14:16:18.256872  141940 settings.go:142] acquiring lock: {Name:mk260d179873b5d5f15b4780b692965367036bbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:16:18.257028  141940 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-137233/kubeconfig
	I1026 14:16:18.257474  141940 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-137233/kubeconfig: {Name:mka07626640e842c6c2177ad5f101c4a2dd91d4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 14:16:18.257700  141940 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 14:16:18.257723  141940 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.34 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 14:16:18.257790  141940 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1026 14:16:18.257905  141940 config.go:182] Loaded profile config "addons-061252": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:16:18.257937  141940 addons.go:69] Setting yakd=true in profile "addons-061252"
	I1026 14:16:18.257949  141940 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-061252"
	I1026 14:16:18.257957  141940 addons.go:69] Setting cloud-spanner=true in profile "addons-061252"
	I1026 14:16:18.257971  141940 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-061252"
	I1026 14:16:18.257973  141940 addons.go:238] Setting addon cloud-spanner=true in "addons-061252"
	I1026 14:16:18.257981  141940 addons.go:69] Setting gcp-auth=true in profile "addons-061252"
	I1026 14:16:18.257968  141940 addons.go:69] Setting storage-provisioner=true in profile "addons-061252"
	I1026 14:16:18.258015  141940 addons.go:69] Setting ingress=true in profile "addons-061252"
	I1026 14:16:18.257987  141940 addons.go:69] Setting registry=true in profile "addons-061252"
	I1026 14:16:18.258022  141940 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-061252"
	I1026 14:16:18.258029  141940 addons.go:238] Setting addon ingress=true in "addons-061252"
	I1026 14:16:18.258031  141940 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-061252"
	I1026 14:16:18.258037  141940 addons.go:238] Setting addon storage-provisioner=true in "addons-061252"
	I1026 14:16:18.258047  141940 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-061252"
	I1026 14:16:18.258063  141940 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-061252"
	I1026 14:16:18.258078  141940 host.go:66] Checking if "addons-061252" exists ...
	I1026 14:16:18.257931  141940 addons.go:69] Setting default-storageclass=true in profile "addons-061252"
	I1026 14:16:18.258089  141940 addons.go:69] Setting inspektor-gadget=true in profile "addons-061252"
	I1026 14:16:18.258094  141940 host.go:66] Checking if "addons-061252" exists ...
	I1026 14:16:18.258101  141940 addons.go:238] Setting addon inspektor-gadget=true in "addons-061252"
	I1026 14:16:18.258116  141940 host.go:66] Checking if "addons-061252" exists ...
	I1026 14:16:18.258008  141940 host.go:66] Checking if "addons-061252" exists ...
	I1026 14:16:18.258578  141940 addons.go:69] Setting metrics-server=true in profile "addons-061252"
	I1026 14:16:18.258605  141940 addons.go:238] Setting addon metrics-server=true in "addons-061252"
	I1026 14:16:18.258631  141940 host.go:66] Checking if "addons-061252" exists ...
	I1026 14:16:18.258705  141940 addons.go:238] Setting addon registry=true in "addons-061252"
	I1026 14:16:18.258723  141940 addons.go:69] Setting volumesnapshots=true in profile "addons-061252"
	I1026 14:16:18.258742  141940 addons.go:238] Setting addon volumesnapshots=true in "addons-061252"
	I1026 14:16:18.258767  141940 host.go:66] Checking if "addons-061252" exists ...
	I1026 14:16:18.258779  141940 host.go:66] Checking if "addons-061252" exists ...
	I1026 14:16:18.258114  141940 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-061252"
	I1026 14:16:18.258077  141940 addons.go:238] Setting addon yakd=true in "addons-061252"
	I1026 14:16:18.259416  141940 host.go:66] Checking if "addons-061252" exists ...
	I1026 14:16:18.259615  141940 out.go:179] * Verifying Kubernetes components...
	I1026 14:16:18.257963  141940 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-061252"
	I1026 14:16:18.259921  141940 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-061252"
	I1026 14:16:18.259953  141940 host.go:66] Checking if "addons-061252" exists ...
	I1026 14:16:18.258016  141940 host.go:66] Checking if "addons-061252" exists ...
	I1026 14:16:18.259674  141940 addons.go:69] Setting ingress-dns=true in profile "addons-061252"
	I1026 14:16:18.260167  141940 addons.go:238] Setting addon ingress-dns=true in "addons-061252"
	I1026 14:16:18.260203  141940 host.go:66] Checking if "addons-061252" exists ...
	I1026 14:16:18.259693  141940 addons.go:69] Setting registry-creds=true in profile "addons-061252"
	I1026 14:16:18.260242  141940 addons.go:238] Setting addon registry-creds=true in "addons-061252"
	I1026 14:16:18.260269  141940 host.go:66] Checking if "addons-061252" exists ...
	I1026 14:16:18.258019  141940 addons.go:69] Setting volcano=true in profile "addons-061252"
	I1026 14:16:18.260511  141940 addons.go:238] Setting addon volcano=true in "addons-061252"
	I1026 14:16:18.260541  141940 host.go:66] Checking if "addons-061252" exists ...
	I1026 14:16:18.261002  141940 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 14:16:18.258011  141940 mustload.go:65] Loading cluster: addons-061252
	I1026 14:16:18.258083  141940 host.go:66] Checking if "addons-061252" exists ...
	I1026 14:16:18.261233  141940 config.go:182] Loaded profile config "addons-061252": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:16:18.265341  141940 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1026 14:16:18.266428  141940 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1026 14:16:18.266419  141940 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1026 14:16:18.266468  141940 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1026 14:16:18.266498  141940 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1026 14:16:18.266562  141940 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1026 14:16:18.266582  141940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1026 14:16:18.267662  141940 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 14:16:18.268287  141940 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1026 14:16:18.268305  141940 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1026 14:16:18.268681  141940 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1026 14:16:18.268308  141940 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1026 14:16:18.268742  141940 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1026 14:16:18.268340  141940 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1026 14:16:18.268979  141940 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1026 14:16:18.268992  141940 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1026 14:16:18.269043  141940 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 14:16:18.269264  141940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	W1026 14:16:18.269090  141940 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1026 14:16:18.269488  141940 host.go:66] Checking if "addons-061252" exists ...
	I1026 14:16:18.269545  141940 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1026 14:16:18.269554  141940 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1026 14:16:18.269564  141940 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1026 14:16:18.269550  141940 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1026 14:16:18.269576  141940 out.go:179]   - Using image docker.io/registry:3.0.0
	I1026 14:16:18.269589  141940 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1026 14:16:18.270651  141940 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1026 14:16:18.269598  141940 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1026 14:16:18.270832  141940 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1026 14:16:18.270864  141940 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1026 14:16:18.271287  141940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1026 14:16:18.270885  141940 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1026 14:16:18.271364  141940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1026 14:16:18.270889  141940 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1026 14:16:18.271423  141940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1026 14:16:18.271592  141940 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1026 14:16:18.271607  141940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1026 14:16:18.272445  141940 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1026 14:16:18.272472  141940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1026 14:16:18.273585  141940 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1026 14:16:18.273594  141940 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1026 14:16:18.274653  141940 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1026 14:16:18.274767  141940 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1026 14:16:18.274790  141940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1026 14:16:18.275874  141940 main.go:141] libmachine: domain addons-061252 has defined MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:16:18.276583  141940 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1026 14:16:18.277626  141940 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1026 14:16:18.278062  141940 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:76:a9:1d", ip: ""} in network mk-addons-061252: {Iface:virbr1 ExpiryTime:2025-10-26 15:15:52 +0000 UTC Type:0 Mac:52:54:00:76:a9:1d Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-061252 Clientid:01:52:54:00:76:a9:1d}
	I1026 14:16:18.278113  141940 main.go:141] libmachine: domain addons-061252 has defined IP address 192.168.39.34 and MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:16:18.278728  141940 main.go:141] libmachine: domain addons-061252 has defined MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:16:18.278644  141940 main.go:141] libmachine: domain addons-061252 has defined MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:16:18.278986  141940 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/addons-061252/id_rsa Username:docker}
	I1026 14:16:18.279254  141940 main.go:141] libmachine: domain addons-061252 has defined MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:16:18.279874  141940 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1026 14:16:18.280089  141940 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:76:a9:1d", ip: ""} in network mk-addons-061252: {Iface:virbr1 ExpiryTime:2025-10-26 15:15:52 +0000 UTC Type:0 Mac:52:54:00:76:a9:1d Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-061252 Clientid:01:52:54:00:76:a9:1d}
	I1026 14:16:18.280102  141940 main.go:141] libmachine: domain addons-061252 has defined MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:16:18.280116  141940 main.go:141] libmachine: domain addons-061252 has defined IP address 192.168.39.34 and MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:16:18.280208  141940 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:76:a9:1d", ip: ""} in network mk-addons-061252: {Iface:virbr1 ExpiryTime:2025-10-26 15:15:52 +0000 UTC Type:0 Mac:52:54:00:76:a9:1d Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-061252 Clientid:01:52:54:00:76:a9:1d}
	I1026 14:16:18.280260  141940 main.go:141] libmachine: domain addons-061252 has defined IP address 192.168.39.34 and MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:16:18.280445  141940 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:76:a9:1d", ip: ""} in network mk-addons-061252: {Iface:virbr1 ExpiryTime:2025-10-26 15:15:52 +0000 UTC Type:0 Mac:52:54:00:76:a9:1d Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-061252 Clientid:01:52:54:00:76:a9:1d}
	I1026 14:16:18.280489  141940 main.go:141] libmachine: domain addons-061252 has defined IP address 192.168.39.34 and MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:16:18.280779  141940 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/addons-061252/id_rsa Username:docker}
	I1026 14:16:18.281003  141940 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/addons-061252/id_rsa Username:docker}
	I1026 14:16:18.281081  141940 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/addons-061252/id_rsa Username:docker}
	I1026 14:16:18.281547  141940 main.go:141] libmachine: domain addons-061252 has defined MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:16:18.281725  141940 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:76:a9:1d", ip: ""} in network mk-addons-061252: {Iface:virbr1 ExpiryTime:2025-10-26 15:15:52 +0000 UTC Type:0 Mac:52:54:00:76:a9:1d Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-061252 Clientid:01:52:54:00:76:a9:1d}
	I1026 14:16:18.281754  141940 main.go:141] libmachine: domain addons-061252 has defined IP address 192.168.39.34 and MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:16:18.282094  141940 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/addons-061252/id_rsa Username:docker}
	I1026 14:16:18.282238  141940 main.go:141] libmachine: domain addons-061252 has defined MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:16:18.282332  141940 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:76:a9:1d", ip: ""} in network mk-addons-061252: {Iface:virbr1 ExpiryTime:2025-10-26 15:15:52 +0000 UTC Type:0 Mac:52:54:00:76:a9:1d Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-061252 Clientid:01:52:54:00:76:a9:1d}
	I1026 14:16:18.282357  141940 main.go:141] libmachine: domain addons-061252 has defined IP address 192.168.39.34 and MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:16:18.282623  141940 main.go:141] libmachine: domain addons-061252 has defined MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:16:18.282632  141940 main.go:141] libmachine: domain addons-061252 has defined MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:16:18.282686  141940 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/addons-061252/id_rsa Username:docker}
	I1026 14:16:18.282865  141940 main.go:141] libmachine: domain addons-061252 has defined MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:16:18.282884  141940 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:76:a9:1d", ip: ""} in network mk-addons-061252: {Iface:virbr1 ExpiryTime:2025-10-26 15:15:52 +0000 UTC Type:0 Mac:52:54:00:76:a9:1d Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-061252 Clientid:01:52:54:00:76:a9:1d}
	I1026 14:16:18.282908  141940 main.go:141] libmachine: domain addons-061252 has defined IP address 192.168.39.34 and MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:16:18.283267  141940 main.go:141] libmachine: domain addons-061252 has defined MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:16:18.283264  141940 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/addons-061252/id_rsa Username:docker}
	I1026 14:16:18.283320  141940 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:76:a9:1d", ip: ""} in network mk-addons-061252: {Iface:virbr1 ExpiryTime:2025-10-26 15:15:52 +0000 UTC Type:0 Mac:52:54:00:76:a9:1d Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-061252 Clientid:01:52:54:00:76:a9:1d}
	I1026 14:16:18.283358  141940 main.go:141] libmachine: domain addons-061252 has defined IP address 192.168.39.34 and MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:16:18.283504  141940 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:76:a9:1d", ip: ""} in network mk-addons-061252: {Iface:virbr1 ExpiryTime:2025-10-26 15:15:52 +0000 UTC Type:0 Mac:52:54:00:76:a9:1d Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-061252 Clientid:01:52:54:00:76:a9:1d}
	I1026 14:16:18.283541  141940 main.go:141] libmachine: domain addons-061252 has defined IP address 192.168.39.34 and MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:16:18.283774  141940 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/addons-061252/id_rsa Username:docker}
	I1026 14:16:18.283764  141940 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/addons-061252/id_rsa Username:docker}
	I1026 14:16:18.283885  141940 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:76:a9:1d", ip: ""} in network mk-addons-061252: {Iface:virbr1 ExpiryTime:2025-10-26 15:15:52 +0000 UTC Type:0 Mac:52:54:00:76:a9:1d Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-061252 Clientid:01:52:54:00:76:a9:1d}
	I1026 14:16:18.283916  141940 main.go:141] libmachine: domain addons-061252 has defined IP address 192.168.39.34 and MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:16:18.284271  141940 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/addons-061252/id_rsa Username:docker}
	I1026 14:16:18.284318  141940 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:76:a9:1d", ip: ""} in network mk-addons-061252: {Iface:virbr1 ExpiryTime:2025-10-26 15:15:52 +0000 UTC Type:0 Mac:52:54:00:76:a9:1d Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-061252 Clientid:01:52:54:00:76:a9:1d}
	I1026 14:16:18.284355  141940 main.go:141] libmachine: domain addons-061252 has defined IP address 192.168.39.34 and MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:16:18.284560  141940 main.go:141] libmachine: domain addons-061252 has defined MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:16:18.284561  141940 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/addons-061252/id_rsa Username:docker}
	I1026 14:16:18.285054  141940 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:76:a9:1d", ip: ""} in network mk-addons-061252: {Iface:virbr1 ExpiryTime:2025-10-26 15:15:52 +0000 UTC Type:0 Mac:52:54:00:76:a9:1d Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-061252 Clientid:01:52:54:00:76:a9:1d}
	I1026 14:16:18.285062  141940 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1026 14:16:18.285084  141940 main.go:141] libmachine: domain addons-061252 has defined IP address 192.168.39.34 and MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:16:18.285280  141940 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/addons-061252/id_rsa Username:docker}
	I1026 14:16:18.287319  141940 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1026 14:16:18.288208  141940 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1026 14:16:18.288224  141940 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1026 14:16:18.290816  141940 main.go:141] libmachine: domain addons-061252 has defined MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:16:18.291197  141940 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:76:a9:1d", ip: ""} in network mk-addons-061252: {Iface:virbr1 ExpiryTime:2025-10-26 15:15:52 +0000 UTC Type:0 Mac:52:54:00:76:a9:1d Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-061252 Clientid:01:52:54:00:76:a9:1d}
	I1026 14:16:18.291228  141940 main.go:141] libmachine: domain addons-061252 has defined IP address 192.168.39.34 and MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:16:18.291408  141940 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/addons-061252/id_rsa Username:docker}
	I1026 14:16:18.311116  141940 addons.go:238] Setting addon default-storageclass=true in "addons-061252"
	I1026 14:16:18.311172  141940 host.go:66] Checking if "addons-061252" exists ...
	I1026 14:16:18.311117  141940 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-061252"
	I1026 14:16:18.311270  141940 host.go:66] Checking if "addons-061252" exists ...
	I1026 14:16:18.312962  141940 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 14:16:18.312980  141940 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 14:16:18.314696  141940 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1026 14:16:18.315935  141940 out.go:179]   - Using image docker.io/busybox:stable
	I1026 14:16:18.315967  141940 main.go:141] libmachine: domain addons-061252 has defined MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:16:18.316472  141940 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:76:a9:1d", ip: ""} in network mk-addons-061252: {Iface:virbr1 ExpiryTime:2025-10-26 15:15:52 +0000 UTC Type:0 Mac:52:54:00:76:a9:1d Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-061252 Clientid:01:52:54:00:76:a9:1d}
	I1026 14:16:18.316508  141940 main.go:141] libmachine: domain addons-061252 has defined IP address 192.168.39.34 and MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:16:18.316772  141940 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/addons-061252/id_rsa Username:docker}
	I1026 14:16:18.317013  141940 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1026 14:16:18.317033  141940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1026 14:16:18.319444  141940 main.go:141] libmachine: domain addons-061252 has defined MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:16:18.319799  141940 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:76:a9:1d", ip: ""} in network mk-addons-061252: {Iface:virbr1 ExpiryTime:2025-10-26 15:15:52 +0000 UTC Type:0 Mac:52:54:00:76:a9:1d Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-061252 Clientid:01:52:54:00:76:a9:1d}
	I1026 14:16:18.319821  141940 main.go:141] libmachine: domain addons-061252 has defined IP address 192.168.39.34 and MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:16:18.319948  141940 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/addons-061252/id_rsa Username:docker}
	W1026 14:16:18.455144  141940 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:53154->192.168.39.34:22: read: connection reset by peer
	I1026 14:16:18.455187  141940 retry.go:31] will retry after 290.367725ms: ssh: handshake failed: read tcp 192.168.39.1:53154->192.168.39.34:22: read: connection reset by peer
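
The two lines above show a transient SSH handshake failure being retried after roughly 290ms. A minimal Go sketch of that kind of retry-with-delay helper follows; it is a hypothetical stand-in for illustration, not minikube's actual retry.go.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // retry runs fn up to attempts times, sleeping delay between failures,
    // and returns the last error if every attempt fails.
    func retry(attempts int, delay time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            time.Sleep(delay)
        }
        return fmt.Errorf("after %d attempts: %w", attempts, err)
    }

    func main() {
        calls := 0
        err := retry(3, 290*time.Millisecond, func() error {
            calls++
            if calls < 2 {
                return errors.New("ssh: handshake failed") // simulate the transient dial error
            }
            return nil
        })
        fmt.Println("calls:", calls, "err:", err)
    }
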
	I1026 14:16:18.820748  141940 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:16:18.820773  141940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1026 14:16:18.851747  141940 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 14:16:18.851796  141940 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1026 14:16:18.889901  141940 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1026 14:16:18.889933  141940 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1026 14:16:18.910738  141940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1026 14:16:18.930767  141940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1026 14:16:18.958430  141940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 14:16:18.962866  141940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 14:16:19.019020  141940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1026 14:16:19.084753  141940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1026 14:16:19.093621  141940 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1026 14:16:19.093647  141940 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1026 14:16:19.129645  141940 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1026 14:16:19.129670  141940 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1026 14:16:19.181586  141940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1026 14:16:19.204131  141940 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1026 14:16:19.204156  141940 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1026 14:16:19.225996  141940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:16:19.254913  141940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1026 14:16:19.309598  141940 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1026 14:16:19.309625  141940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1026 14:16:19.458997  141940 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1026 14:16:19.459028  141940 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1026 14:16:19.642822  141940 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1026 14:16:19.642857  141940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1026 14:16:19.706977  141940 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1026 14:16:19.707006  141940 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1026 14:16:19.818137  141940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1026 14:16:19.941988  141940 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1026 14:16:19.942015  141940 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1026 14:16:20.015503  141940 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1026 14:16:20.015532  141940 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1026 14:16:20.105930  141940 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1026 14:16:20.105959  141940 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1026 14:16:20.253987  141940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1026 14:16:20.372325  141940 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1026 14:16:20.372353  141940 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1026 14:16:20.649982  141940 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1026 14:16:20.650045  141940 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1026 14:16:20.680150  141940 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1026 14:16:20.680181  141940 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1026 14:16:20.705035  141940 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 14:16:20.705067  141940 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1026 14:16:20.758040  141940 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1026 14:16:20.758067  141940 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1026 14:16:20.901237  141940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 14:16:20.988178  141940 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1026 14:16:20.988217  141940 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1026 14:16:21.035928  141940 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1026 14:16:21.035963  141940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1026 14:16:21.160278  141940 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1026 14:16:21.160301  141940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1026 14:16:21.296617  141940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1026 14:16:21.315671  141940 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1026 14:16:21.315694  141940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1026 14:16:21.401390  141940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1026 14:16:21.707870  141940 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1026 14:16:21.707912  141940 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1026 14:16:21.995791  141940 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1026 14:16:21.995822  141940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1026 14:16:22.069707  141940 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.21790918s)
	I1026 14:16:22.069766  141940 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.217937542s)
	I1026 14:16:22.069795  141940 start.go:976] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1026 14:16:22.069825  141940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.159043712s)
	I1026 14:16:22.069893  141940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.13909716s)
	I1026 14:16:22.070893  141940 node_ready.go:35] waiting up to 6m0s for node "addons-061252" to be "Ready" ...
	I1026 14:16:22.078423  141940 node_ready.go:49] node "addons-061252" is "Ready"
	I1026 14:16:22.078479  141940 node_ready.go:38] duration metric: took 7.531444ms for node "addons-061252" to be "Ready" ...
	I1026 14:16:22.078500  141940 api_server.go:52] waiting for apiserver process to appear ...
	I1026 14:16:22.078577  141940 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 14:16:22.386542  141940 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1026 14:16:22.386565  141940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1026 14:16:22.570839  141940 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1026 14:16:22.570867  141940 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1026 14:16:22.584101  141940 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-061252" context rescaled to 1 replicas
	I1026 14:16:22.710902  141940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1026 14:16:23.717981  141940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.755083943s)
	I1026 14:16:23.718063  141940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.759590052s)
	I1026 14:16:23.718164  141940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.699120746s)
	I1026 14:16:23.718234  141940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.633452645s)
	I1026 14:16:23.718285  141940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.536665385s)
	I1026 14:16:25.237789  141940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (6.011739157s)
	W1026 14:16:25.237837  141940 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:16:25.237863  141940 retry.go:31] will retry after 175.760715ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
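
The failure above is kubectl's validation rejecting /etc/kubernetes/addons/ig-crd.yaml because its documents carry no apiVersion or kind (the earlier scp line copied only 14 bytes for that file), and minikube retries the apply below with --force. As a sketch of how such a manifest could be sanity-checked before applying, the following illustrative Go helper (using gopkg.in/yaml.v3, not part of minikube) walks every YAML document in the file and flags missing apiVersion/kind fields.

    package main

    import (
        "errors"
        "fmt"
        "io"
        "log"
        "os"

        "gopkg.in/yaml.v3"
    )

    // checkManifest decodes every YAML document in the file and reports any
    // document missing apiVersion or kind, which is what kubectl complained about.
    func checkManifest(path string) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for i := 0; ; i++ {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err != nil {
                if errors.Is(err, io.EOF) {
                    return nil
                }
                return err
            }
            if doc == nil { // empty document between separators
                continue
            }
            if doc["apiVersion"] == nil || doc["kind"] == nil {
                return fmt.Errorf("%s: document %d has no apiVersion/kind", path, i)
            }
        }
    }

    func main() {
        // Path taken from the log; adjust for local use.
        if err := checkManifest("/etc/kubernetes/addons/ig-crd.yaml"); err != nil {
            log.Fatal(err)
        }
        fmt.Println("manifest looks structurally complete")
    }
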
	I1026 14:16:25.237873  141940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.98293576s)
	I1026 14:16:25.414052  141940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:16:25.732936  141940 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1026 14:16:25.735515  141940 main.go:141] libmachine: domain addons-061252 has defined MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:16:25.735960  141940 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:76:a9:1d", ip: ""} in network mk-addons-061252: {Iface:virbr1 ExpiryTime:2025-10-26 15:15:52 +0000 UTC Type:0 Mac:52:54:00:76:a9:1d Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-061252 Clientid:01:52:54:00:76:a9:1d}
	I1026 14:16:25.735985  141940 main.go:141] libmachine: domain addons-061252 has defined IP address 192.168.39.34 and MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:16:25.736112  141940 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/addons-061252/id_rsa Username:docker}
	I1026 14:16:26.059650  141940 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1026 14:16:26.194604  141940 addons.go:238] Setting addon gcp-auth=true in "addons-061252"
	I1026 14:16:26.194661  141940 host.go:66] Checking if "addons-061252" exists ...
	I1026 14:16:26.196512  141940 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1026 14:16:26.199190  141940 main.go:141] libmachine: domain addons-061252 has defined MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:16:26.199643  141940 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:76:a9:1d", ip: ""} in network mk-addons-061252: {Iface:virbr1 ExpiryTime:2025-10-26 15:15:52 +0000 UTC Type:0 Mac:52:54:00:76:a9:1d Iaid: IPaddr:192.168.39.34 Prefix:24 Hostname:addons-061252 Clientid:01:52:54:00:76:a9:1d}
	I1026 14:16:26.199665  141940 main.go:141] libmachine: domain addons-061252 has defined IP address 192.168.39.34 and MAC address 52:54:00:76:a9:1d in network mk-addons-061252
	I1026 14:16:26.199800  141940 sshutil.go:53] new ssh client: &{IP:192.168.39.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/addons-061252/id_rsa Username:docker}
	I1026 14:16:26.642663  141940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.388628936s)
	I1026 14:16:26.642715  141940 addons.go:479] Verifying addon registry=true in "addons-061252"
	I1026 14:16:26.642746  141940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.74147999s)
	I1026 14:16:26.642762  141940 addons.go:479] Verifying addon metrics-server=true in "addons-061252"
	I1026 14:16:26.642850  141940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.346187927s)
	I1026 14:16:26.644152  141940 out.go:179] * Verifying registry addon...
	I1026 14:16:26.644151  141940 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-061252 service yakd-dashboard -n yakd-dashboard
	
	I1026 14:16:26.645798  141940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.827607281s)
	I1026 14:16:26.645837  141940 addons.go:479] Verifying addon ingress=true in "addons-061252"
	I1026 14:16:26.646092  141940 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1026 14:16:26.646975  141940 out.go:179] * Verifying ingress addon...
	I1026 14:16:26.649047  141940 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1026 14:16:26.683176  141940 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1026 14:16:26.683218  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:26.683747  141940 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1026 14:16:26.683775  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
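
kapi.go above is polling pods by label selector until they leave Pending. A rough sketch of the same kind of wait using client-go follows; the kubeconfig path, selector, and timeout are assumptions for illustration, and this is not minikube's kapi.go.

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForRunning polls until every pod matching the selector is Running
    // or the timeout expires.
    func waitForRunning(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
            if err != nil {
                return err
            }
            ready := len(pods.Items) > 0
            for _, p := range pods.Items {
                if p.Status.Phase != corev1.PodRunning {
                    ready = false
                    break
                }
            }
            if ready {
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("timed out waiting for %q in %q", selector, ns)
    }

    func main() {
        // Kubeconfig path is an assumption for local experimentation.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        if err := waitForRunning(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 3*time.Minute); err != nil {
            log.Fatal(err)
        }
        fmt.Println("registry pods are Running")
    }
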
	I1026 14:16:26.761568  141940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.360126689s)
	W1026 14:16:26.761626  141940 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1026 14:16:26.761647  141940 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.683041726s)
	I1026 14:16:26.761657  141940 retry.go:31] will retry after 326.943012ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1026 14:16:26.761692  141940 api_server.go:72] duration metric: took 8.503935621s to wait for apiserver process to appear ...
	I1026 14:16:26.761706  141940 api_server.go:88] waiting for apiserver healthz status ...
	I1026 14:16:26.761740  141940 api_server.go:253] Checking apiserver healthz at https://192.168.39.34:8443/healthz ...
	I1026 14:16:26.792537  141940 api_server.go:279] https://192.168.39.34:8443/healthz returned 200:
	ok
	I1026 14:16:26.799316  141940 api_server.go:141] control plane version: v1.34.1
	I1026 14:16:26.799344  141940 api_server.go:131] duration metric: took 37.631431ms to wait for apiserver health ...
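
The health check above simply expects an HTTP 200 with body "ok" from the apiserver's /healthz endpoint. A bare-bones Go sketch of such a probe follows; skipping TLS verification is only for this illustration, since a real check would trust the cluster CA instead.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "log"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Skipping verification is only for this sketch; the apiserver
            // presents a cluster-CA-signed certificate in real use.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.39.34:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
                    return
                }
            }
            time.Sleep(2 * time.Second)
        }
        log.Fatal("apiserver did not become healthy in time")
    }
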
	I1026 14:16:26.799355  141940 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 14:16:26.833380  141940 system_pods.go:59] 17 kube-system pods found
	I1026 14:16:26.833423  141940 system_pods.go:61] "amd-gpu-device-plugin-52p56" [d91ddf2b-867e-4c39-9243-20c81a38f82d] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1026 14:16:26.833431  141940 system_pods.go:61] "coredns-66bc5c9577-lq8mf" [28a691d1-64fb-44e3-8bff-c087e6941e32] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 14:16:26.833439  141940 system_pods.go:61] "coredns-66bc5c9577-wv2kq" [0e7d6507-2df4-4ffd-b909-aad209110ad9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 14:16:26.833443  141940 system_pods.go:61] "etcd-addons-061252" [a76f555d-afb1-45d7-957e-48b34bc80e56] Running
	I1026 14:16:26.833448  141940 system_pods.go:61] "kube-apiserver-addons-061252" [7eba2271-02b1-415d-913f-a680e6f7ebeb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 14:16:26.833452  141940 system_pods.go:61] "kube-controller-manager-addons-061252" [07f372aa-4be9-4ef0-bf89-0a78f6182378] Running
	I1026 14:16:26.833471  141940 system_pods.go:61] "kube-ingress-dns-minikube" [36b8ea46-a7ec-4896-9b65-f41a07fe5e13] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1026 14:16:26.833474  141940 system_pods.go:61] "kube-proxy-ltxkd" [6476cb31-99c8-4ed0-88ec-1260d6304141] Running
	I1026 14:16:26.833479  141940 system_pods.go:61] "kube-scheduler-addons-061252" [3c563936-5ad6-46ba-a396-40e84f3c3001] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 14:16:26.833484  141940 system_pods.go:61] "metrics-server-85b7d694d7-jgpx5" [8b47107f-7c68-4a56-82cf-e908c35fc406] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 14:16:26.833489  141940 system_pods.go:61] "nvidia-device-plugin-daemonset-6wtxh" [b47844e1-10f4-4b23-ae63-5df39995a764] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1026 14:16:26.833493  141940 system_pods.go:61] "registry-6b586f9694-cbv4c" [7d3cca1e-f530-4267-a552-8536b1621127] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1026 14:16:26.833498  141940 system_pods.go:61] "registry-creds-764b6fb674-sdhxc" [b4b55668-849c-4df3-a4ca-04f628b6f383] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1026 14:16:26.833505  141940 system_pods.go:61] "registry-proxy-rst9d" [5d630e1a-522c-4021-aa39-21738869a7c4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1026 14:16:26.833509  141940 system_pods.go:61] "snapshot-controller-7d9fbc56b8-l5vwz" [43936a43-2c01-40e8-9b21-df25d6fb426b] Pending
	I1026 14:16:26.833513  141940 system_pods.go:61] "snapshot-controller-7d9fbc56b8-vfgn8" [ae87f8ca-5aa9-46ed-81c7-8808bcc323c5] Pending
	I1026 14:16:26.833516  141940 system_pods.go:61] "storage-provisioner" [fb816fec-acef-4c73-bceb-27ec431327d7] Running
	I1026 14:16:26.833523  141940 system_pods.go:74] duration metric: took 34.162202ms to wait for pod list to return data ...
	I1026 14:16:26.833534  141940 default_sa.go:34] waiting for default service account to be created ...
	I1026 14:16:26.864048  141940 default_sa.go:45] found service account: "default"
	I1026 14:16:26.864075  141940 default_sa.go:55] duration metric: took 30.535309ms for default service account to be created ...
	I1026 14:16:26.864083  141940 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 14:16:26.914036  141940 system_pods.go:86] 17 kube-system pods found
	I1026 14:16:26.914069  141940 system_pods.go:89] "amd-gpu-device-plugin-52p56" [d91ddf2b-867e-4c39-9243-20c81a38f82d] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1026 14:16:26.914076  141940 system_pods.go:89] "coredns-66bc5c9577-lq8mf" [28a691d1-64fb-44e3-8bff-c087e6941e32] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 14:16:26.914085  141940 system_pods.go:89] "coredns-66bc5c9577-wv2kq" [0e7d6507-2df4-4ffd-b909-aad209110ad9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 14:16:26.914091  141940 system_pods.go:89] "etcd-addons-061252" [a76f555d-afb1-45d7-957e-48b34bc80e56] Running
	I1026 14:16:26.914100  141940 system_pods.go:89] "kube-apiserver-addons-061252" [7eba2271-02b1-415d-913f-a680e6f7ebeb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 14:16:26.914110  141940 system_pods.go:89] "kube-controller-manager-addons-061252" [07f372aa-4be9-4ef0-bf89-0a78f6182378] Running
	I1026 14:16:26.914120  141940 system_pods.go:89] "kube-ingress-dns-minikube" [36b8ea46-a7ec-4896-9b65-f41a07fe5e13] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1026 14:16:26.914125  141940 system_pods.go:89] "kube-proxy-ltxkd" [6476cb31-99c8-4ed0-88ec-1260d6304141] Running
	I1026 14:16:26.914139  141940 system_pods.go:89] "kube-scheduler-addons-061252" [3c563936-5ad6-46ba-a396-40e84f3c3001] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 14:16:26.914144  141940 system_pods.go:89] "metrics-server-85b7d694d7-jgpx5" [8b47107f-7c68-4a56-82cf-e908c35fc406] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 14:16:26.914157  141940 system_pods.go:89] "nvidia-device-plugin-daemonset-6wtxh" [b47844e1-10f4-4b23-ae63-5df39995a764] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1026 14:16:26.914167  141940 system_pods.go:89] "registry-6b586f9694-cbv4c" [7d3cca1e-f530-4267-a552-8536b1621127] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1026 14:16:26.914171  141940 system_pods.go:89] "registry-creds-764b6fb674-sdhxc" [b4b55668-849c-4df3-a4ca-04f628b6f383] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1026 14:16:26.914178  141940 system_pods.go:89] "registry-proxy-rst9d" [5d630e1a-522c-4021-aa39-21738869a7c4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1026 14:16:26.914182  141940 system_pods.go:89] "snapshot-controller-7d9fbc56b8-l5vwz" [43936a43-2c01-40e8-9b21-df25d6fb426b] Pending
	I1026 14:16:26.914188  141940 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vfgn8" [ae87f8ca-5aa9-46ed-81c7-8808bcc323c5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 14:16:26.914196  141940 system_pods.go:89] "storage-provisioner" [fb816fec-acef-4c73-bceb-27ec431327d7] Running
	I1026 14:16:26.914207  141940 system_pods.go:126] duration metric: took 50.117551ms to wait for k8s-apps to be running ...
	I1026 14:16:26.914222  141940 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 14:16:26.914281  141940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 14:16:27.089392  141940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1026 14:16:27.164994  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:27.187073  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:27.543527  141940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.832524704s)
	I1026 14:16:27.543578  141940 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-061252"
	I1026 14:16:27.545058  141940 out.go:179] * Verifying csi-hostpath-driver addon...
	I1026 14:16:27.546907  141940 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1026 14:16:27.572680  141940 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1026 14:16:27.572703  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
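The kapi.go "waiting for pod" lines that recur through the rest of this log poll the pods behind a label selector and keep reporting the phase until everything is Running. A rough client-go sketch of that kind of wait is shown here; the namespace and selector come from the log, but the kubeconfig path, polling interval, and overall structure are assumptions for illustration, not minikube's kapi implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabel polls pods matching selector in ns until every pod is Running,
// roughly what the "waiting for pod ..." log lines reflect.
func waitForLabel(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		allRunning := len(pods.Items) > 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				allRunning = false
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
			}
		}
		if allRunning {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond):
		}
	}
}

func main() {
	// Kubeconfig path is an assumption for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	_ = waitForLabel(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=csi-hostpath-driver")
}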
	I1026 14:16:27.655405  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:27.664269  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:28.052709  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:28.101089  141940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.686996196s)
	I1026 14:16:28.101125  141940 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.904585105s)
	W1026 14:16:28.101149  141940 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:16:28.101181  141940 retry.go:31] will retry after 384.853072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
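The recurring failure here is client-side validation: kubectl reports that /etc/kubernetes/addons/ig-crd.yaml is missing its apiVersion and kind fields, so every retry of the same apply hits the same error. The retry.go lines reflect an apply-with-backoff pattern; the Go sketch below illustrates that pattern under stated assumptions. The file arguments are copied from the log, while the direct kubectl invocation, attempt count, and backoff schedule are illustrative and not minikube's actual retry policy.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry mirrors the pattern visible in the retry.go lines above:
// run kubectl apply, and on failure wait an increasing interval and try again.
// The backoff steps here are illustrative, not minikube's schedule.
func applyWithRetry(files []string, backoffs []time.Duration) error {
	args := append([]string{"apply", "--force"}, files...)
	var lastErr error
	for attempt := 0; attempt <= len(backoffs); attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply failed: %v\n%s", err, out)
		if attempt < len(backoffs) {
			fmt.Printf("will retry after %v: %v\n", backoffs[attempt], err)
			time.Sleep(backoffs[attempt])
		}
	}
	return lastErr
}

func main() {
	files := []string{
		"-f", "/etc/kubernetes/addons/ig-crd.yaml",
		"-f", "/etc/kubernetes/addons/ig-deployment.yaml",
	}
	err := applyWithRetry(files, []time.Duration{400 * time.Millisecond, 1 * time.Second, 2 * time.Second})
	fmt.Println(err)
}

Passing --validate=false, as the error message suggests, would only suppress the check; the manifest itself needs its apiVersion and kind headers for the apply to do anything useful.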
	I1026 14:16:28.101205  141940 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.186895874s)
	I1026 14:16:28.101245  141940 system_svc.go:56] duration metric: took 1.187018871s WaitForService to wait for kubelet
	I1026 14:16:28.101260  141940 kubeadm.go:586] duration metric: took 9.843501779s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 14:16:28.101290  141940 node_conditions.go:102] verifying NodePressure condition ...
	I1026 14:16:28.102654  141940 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1026 14:16:28.103773  141940 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1026 14:16:28.104690  141940 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1026 14:16:28.104706  141940 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1026 14:16:28.106363  141940 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 14:16:28.106389  141940 node_conditions.go:123] node cpu capacity is 2
	I1026 14:16:28.106406  141940 node_conditions.go:105] duration metric: took 5.108512ms to run NodePressure ...
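The node_conditions lines record a quick pass over the node's pressure conditions and capacity before startup continues. A hedged client-go sketch of that kind of check follows; the kubeconfig path and output format are assumptions, and whether the capacity figures above come from Capacity or Allocatable is not visible from the log.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// reportNodePressure lists the cluster's nodes, prints their CPU and
// ephemeral-storage capacity, and flags any pressure condition that is
// not False, roughly what the node_conditions lines above summarise.
func reportNodePressure(ctx context.Context, cs *kubernetes.Clientset) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("node %s: cpu capacity %s, ephemeral storage %s\n", n.Name, cpu.String(), storage.String())
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				if c.Status != corev1.ConditionFalse {
					fmt.Printf("  pressure condition %s is %s\n", c.Type, c.Status)
				}
			}
		}
	}
	return nil
}

func main() {
	// Kubeconfig path is an assumption for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	_ = reportNodePressure(context.Background(), kubernetes.NewForConfigOrDie(cfg))
}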
	I1026 14:16:28.106423  141940 start.go:241] waiting for startup goroutines ...
	I1026 14:16:28.149473  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:28.152939  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:28.165962  141940 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1026 14:16:28.165986  141940 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1026 14:16:28.230532  141940 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1026 14:16:28.230557  141940 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1026 14:16:28.283897  141940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1026 14:16:28.487209  141940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:16:28.552264  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:28.653010  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:28.653082  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:29.053017  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:29.149836  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:29.153329  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:29.446369  141940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.356918275s)
	I1026 14:16:29.551970  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:29.696681  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:29.697394  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:29.771431  141940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.487477818s)
	I1026 14:16:29.772382  141940 addons.go:479] Verifying addon gcp-auth=true in "addons-061252"
	I1026 14:16:29.773689  141940 out.go:179] * Verifying gcp-auth addon...
	I1026 14:16:29.775308  141940 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1026 14:16:29.790892  141940 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1026 14:16:29.790911  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:30.051665  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:30.151174  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:30.155535  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:30.278802  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:30.355921  141940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.868651991s)
	W1026 14:16:30.355971  141940 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:16:30.355998  141940 retry.go:31] will retry after 537.68022ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:16:30.553818  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:30.654682  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:30.658412  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:30.781025  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:30.894278  141940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:16:31.053318  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:31.151640  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:31.155579  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:31.281030  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:31.553972  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:31.654746  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:31.655515  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:31.779328  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:32.053929  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:32.140491  141940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.246173663s)
	W1026 14:16:32.140527  141940 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:16:32.140555  141940 retry.go:31] will retry after 1.154779112s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:16:32.152970  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:32.156486  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:32.279598  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:32.555567  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:32.654749  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:32.655010  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:32.782318  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:33.052218  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:33.151300  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:33.155189  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:33.279076  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:33.296234  141940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:16:33.552740  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:33.654988  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:33.655210  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:33.779955  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:34.059520  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:34.150485  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:34.153709  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:34.279168  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:34.322673  141940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.026396696s)
	W1026 14:16:34.322719  141940 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:16:34.322746  141940 retry.go:31] will retry after 1.714857965s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:16:34.553248  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:34.649776  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:34.652918  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:34.781018  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:35.051567  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:35.150511  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:35.156523  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:35.278654  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:35.551264  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:35.649873  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:35.652672  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:35.779779  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:36.038146  141940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:16:36.052291  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:36.148820  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:36.152560  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:36.280202  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:36.550338  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:36.655881  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:36.656007  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:36.780936  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:37.075193  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:37.149660  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:37.151349  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:37.268966  141940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.230779502s)
	W1026 14:16:37.269006  141940 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:16:37.269026  141940 retry.go:31] will retry after 1.463805606s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:16:37.278977  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:37.552366  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:37.656620  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:37.656919  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:37.779110  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:38.059216  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:38.150968  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:38.155473  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:38.279585  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:38.551716  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:38.733308  141940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:16:38.816225  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:38.817920  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:38.818176  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:39.051044  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:39.154473  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:39.154599  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:39.278910  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:39.549964  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1026 14:16:39.556104  141940 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:16:39.556140  141940 retry.go:31] will retry after 3.867326235s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:16:39.651978  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:39.653218  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:39.780412  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:40.052265  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:40.154247  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:40.155502  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:40.279001  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:40.550027  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:40.650052  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:40.651656  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:40.780204  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:41.354436  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:41.354568  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:41.355048  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:41.355544  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:41.570501  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:41.651449  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:41.653182  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:41.780375  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:42.052577  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:42.151978  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:42.154350  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:42.283799  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:42.721661  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:42.721660  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:42.723085  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:42.779900  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:43.050955  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:43.150875  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:43.152648  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:43.282875  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:43.424252  141940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:16:43.553773  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:43.650920  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:43.653537  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:43.779983  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:44.051498  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:44.150387  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:44.155053  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:44.280895  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:16:44.309316  141940 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:16:44.309347  141940 retry.go:31] will retry after 4.619232096s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:16:44.550680  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:44.655929  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:44.656153  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:44.779293  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:45.050999  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:45.150229  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:45.152255  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:45.278955  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:45.550224  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:45.650129  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:45.652026  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:45.778388  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:46.051939  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:46.150612  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:46.153676  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:46.280441  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:46.551335  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:46.650001  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:46.652089  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:46.779366  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:47.051589  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:47.149704  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:47.152622  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:47.278710  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:47.551551  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:47.652838  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:47.653566  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:47.777978  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:48.050296  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:48.148936  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:48.152720  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:48.278837  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:48.550665  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:48.650302  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:48.652235  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:48.779059  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:48.929328  141940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:16:49.052918  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:49.154803  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:49.156317  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:49.281322  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:49.553691  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:49.650094  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:49.655362  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:49.778890  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:16:49.891848  141940 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:16:49.891883  141940 retry.go:31] will retry after 3.715235148s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:16:50.052006  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:50.155379  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:50.156311  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:50.279022  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:50.551660  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:50.650625  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:50.652171  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:50.779788  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:51.051292  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:51.152857  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:51.152864  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:51.288024  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:51.556982  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:51.650274  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:51.652935  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:51.781620  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:52.053061  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:52.154014  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:52.160965  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:52.280147  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:52.552101  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:52.650161  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:52.652929  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:52.779472  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:53.051325  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:53.258092  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:53.258300  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:53.280498  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:53.554435  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:53.607415  141940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:16:53.653974  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:53.654051  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:53.782484  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:54.053701  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:54.149315  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:54.151292  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:54.280432  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:16:54.345930  141940 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:16:54.345973  141940 retry.go:31] will retry after 7.240639027s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
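	
	Every "apply failed, will retry" entry above traces back to the same problem: kubectl's client-side validation reports that /etc/kubernetes/addons/ig-crd.yaml does not set the top-level apiVersion and kind fields, so the resources from the accompanying deployment manifest apply cleanly while the CRD file is rejected, and the whole apply is retried with growing backoff (7s, 13s, 23s, 35s below). As a rough illustration only, with placeholder group and kind names that are not taken from this report, a manifest that passes this validation starts with a header like:
	
	    apiVersion: apiextensions.k8s.io/v1        # required top-level field flagged as "not set"
	    kind: CustomResourceDefinition             # required top-level field flagged as "not set"
	    metadata:
	      name: examples.gadget.example.com        # placeholder CRD name, not from the report
	    spec:
	      group: gadget.example.com                # placeholder API group
	      names:
	        kind: Example
	        plural: examples
	      scope: Namespaced
	      versions:
	        - name: v1
	          served: true
	          storage: true
	          schema:
	            openAPIV3Schema:
	              type: object
	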
	I1026 14:16:54.550938  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:54.649899  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:54.651860  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:54.779591  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:55.051473  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:55.149420  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:55.151493  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:55.278161  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:55.550228  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:55.650267  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:55.651479  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:55.778774  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:56.051421  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:56.149561  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:56.152734  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:56.279416  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:56.551785  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:56.650708  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:56.653250  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:56.780572  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:57.051883  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:57.151184  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:57.153363  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:57.278780  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:57.551179  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:57.652716  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:57.652891  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:57.778583  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:58.051114  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:58.149814  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:58.152314  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:58.278960  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:58.550990  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:58.649851  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:58.652113  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:58.779640  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:59.051253  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:59.148778  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:59.151792  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:59.278951  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:16:59.550469  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:16:59.649568  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:16:59.651847  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:16:59.779533  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:00.051148  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:00.149099  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:17:00.151881  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:00.279612  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:00.553633  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:00.651885  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:17:00.652559  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:00.780140  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:01.050780  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:01.152390  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:17:01.152824  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:01.280130  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:01.552657  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:01.587746  141940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:17:01.652530  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:17:01.657396  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:01.780120  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:02.052794  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:02.156539  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:17:02.159152  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:02.279663  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:17:02.460527  141940 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:17:02.460574  141940 retry.go:31] will retry after 13.338688131s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:17:02.551399  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:02.649837  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 14:17:02.653882  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:02.779800  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:03.052193  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:03.150438  141940 kapi.go:107] duration metric: took 36.504344219s to wait for kubernetes.io/minikube-addons=registry ...
	I1026 14:17:03.152921  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:03.279040  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:03.550495  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:03.789384  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:03.791620  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:04.051618  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:04.152953  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:04.278890  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:04.551266  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:04.652580  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:04.791711  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:05.051495  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:05.152918  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:05.278775  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:05.550336  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:05.652592  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:05.778261  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:06.052450  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:06.153386  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:06.279937  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:06.550402  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:06.653132  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:06.778852  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:07.052476  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:07.152543  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:07.280950  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:07.554156  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:07.654984  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:07.780103  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:08.108086  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:08.155402  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:08.278042  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:08.551398  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:08.656610  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:08.780233  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:09.050658  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:09.153080  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:09.279898  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:09.552847  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:09.653292  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:09.781003  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:10.051326  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:10.153392  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:10.279914  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:10.550050  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:10.653442  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:10.778222  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:11.053267  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:11.154071  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:11.280668  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:11.552967  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:11.655603  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:11.779334  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:12.209314  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:12.212284  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:12.282497  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:12.553475  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:12.654715  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:12.779345  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:13.051500  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:13.154057  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:13.279206  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:13.553726  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:13.653185  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:13.779878  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:14.051612  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:14.153941  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:14.279445  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:14.550565  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:14.654080  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:14.780625  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:15.053208  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:15.154445  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:15.278894  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:15.552868  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:15.653126  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:15.780005  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:15.800161  141940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:17:16.050968  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:16.152703  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:16.281568  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:16.552849  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:16.663840  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:16.779437  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:16.956492  141940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.156256452s)
	W1026 14:17:16.956563  141940 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:17:16.956591  141940 retry.go:31] will retry after 23.707242897s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:17:17.052534  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:17.155247  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:17.279801  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:17.550659  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:17.653670  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:17.782406  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:18.053536  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:18.154197  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:18.283628  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:18.552676  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:18.656348  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:18.785900  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:19.049739  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:19.152716  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:19.278777  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:19.550948  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:19.653296  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:19.779696  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:20.050919  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:20.153134  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:20.279323  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:20.550757  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:20.653031  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:20.779113  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:21.050725  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:21.152732  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:21.280516  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:21.551847  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:21.657920  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:22.141863  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:22.144744  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:22.171648  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:22.280087  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:22.552928  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:22.654547  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:22.778418  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:23.051514  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:23.154477  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:23.278724  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:23.551843  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:23.653424  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:23.779985  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:24.051558  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:24.155231  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:24.279785  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:24.550884  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:24.653055  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:24.781955  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:25.051630  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:25.155570  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:25.280745  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:25.552416  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:25.654062  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:25.779681  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:26.054545  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:26.154149  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:26.281069  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:26.551528  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:26.653089  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:26.778974  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:27.051615  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:27.153107  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:27.279061  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:27.550425  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:27.652600  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:27.778131  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:28.051232  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:28.152730  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:28.278897  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:28.554003  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:28.655328  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:28.781486  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:29.055986  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:29.158404  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:29.278666  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:29.551960  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:29.655690  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:29.781134  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:30.051223  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:30.152974  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:30.278837  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:30.557037  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:30.655355  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:30.781160  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:31.051368  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:31.153066  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:31.282658  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:31.552865  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:31.652747  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:31.781488  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:32.052334  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:32.152239  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:32.278959  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:32.552437  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:32.656119  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:32.781045  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:33.051826  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:33.153349  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:33.281211  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:33.759740  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:33.760627  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:33.781080  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:34.052561  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:34.152760  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:34.278746  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:34.554928  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:34.653810  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:34.781385  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:35.053057  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:35.153871  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:35.283291  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:35.557342  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:35.653350  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:35.787402  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:36.228673  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:36.229021  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:36.327002  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:36.553906  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:36.653527  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:36.780210  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:37.052412  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:37.153390  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:37.278663  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:37.551770  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:37.656001  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:37.783865  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:38.049920  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:38.153753  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:38.279527  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:38.553254  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:38.655905  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:38.780510  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:39.051942  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:39.153710  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:39.279878  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:39.566061  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:39.661165  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:39.780075  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:40.051520  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:40.152902  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:40.279471  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:40.551932  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:40.658391  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:40.664415  141940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:17:40.778837  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:41.055609  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:41.153697  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:41.281861  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:41.550030  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:41.655507  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:41.711509  141940 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.047045403s)
	W1026 14:17:41.711576  141940 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:17:41.711606  141940 retry.go:31] will retry after 34.949013766s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 14:17:41.778627  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:42.051961  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:42.152764  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:42.280172  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:42.555882  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:42.654580  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:42.781726  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:43.052486  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:43.155976  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:43.279292  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:43.551195  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 14:17:43.654184  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:43.779189  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:44.051608  141940 kapi.go:107] duration metric: took 1m16.504695652s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1026 14:17:44.152607  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:44.278879  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:44.653057  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:44.779303  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:45.153057  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:45.280130  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:45.653146  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:45.779148  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:46.152359  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:46.280604  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:46.653414  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:46.778873  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:47.152492  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:47.278985  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:47.653388  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:47.778792  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:48.152372  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:48.279675  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:48.652993  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:48.779398  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:49.153588  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:49.278990  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:49.654104  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:49.779329  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:50.153479  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:50.278395  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:50.653167  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:50.780500  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:51.153318  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:51.280322  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:51.653189  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:51.779038  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:52.152553  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:52.279076  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:52.652954  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:52.779760  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:53.153223  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:53.279785  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:53.654229  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:53.780073  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:54.152853  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:54.278609  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:54.654152  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:54.780094  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:55.152772  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:55.279968  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:55.652866  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:55.779070  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:56.152630  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:56.278982  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:56.653029  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:56.779618  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:57.152679  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:57.279149  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:57.652987  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:57.779541  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:58.153076  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:58.279204  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:58.653744  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:58.780690  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:59.153646  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:59.278768  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:17:59.653860  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:17:59.778887  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:00.154107  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:00.279436  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:00.653590  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:00.779858  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:01.152548  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:01.279026  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:01.652986  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:01.779271  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:02.153422  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:02.278301  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:02.653599  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:02.779742  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:03.153354  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:03.279162  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:03.653535  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:03.778589  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:04.153942  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:04.279519  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:04.653349  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:04.779278  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:05.153317  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:05.280314  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:05.652714  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:05.778572  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:06.153721  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:06.280167  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:06.653307  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:06.779155  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:07.153564  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:07.279327  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:07.653583  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:07.778886  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:08.152847  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:08.279162  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:08.652936  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:08.778695  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:09.154401  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:09.279069  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:09.653466  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:09.778533  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:10.153139  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:10.279609  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:10.653675  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:10.779746  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:11.153640  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:11.279844  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:11.653273  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:11.778045  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:12.152733  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:12.279082  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:12.653113  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:12.778900  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:13.153086  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:13.279499  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:13.655862  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:13.778839  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:14.152298  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:14.278915  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:14.652716  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:14.778947  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:15.153090  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:15.279271  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:15.653284  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:15.780619  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:16.349099  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:16.349236  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:16.653420  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:16.661325  141940 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 14:18:16.779407  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:17.153679  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:17.279348  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 14:18:17.333878  141940 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1026 14:18:17.334032  141940 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
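
The retry above fails on manifest validation rather than on the apply itself: kubectl refuses any document whose top-level apiVersion or kind is missing unless --validate=false is passed. The following is a minimal, hypothetical pre-flight sketch (not minikube code), standard-library Go with a naive line scan standing in for a real YAML parser, that approximates the same check against a manifest such as the ig-crd.yaml named in the error.

// preflight.go: hypothetical sketch approximating kubectl's complaint above:
// every YAML document applied must set top-level apiVersion and kind.
// Standard library only; a naive line scan stands in for a real YAML parser.
package main

import (
	"fmt"
	"os"
	"strings"
)

// hasTypeMeta reports whether a single YAML document sets top-level
// apiVersion and kind keys (indented keys belong to nested objects).
func hasTypeMeta(doc string) (hasAPIVersion, hasKind bool) {
	for _, line := range strings.Split(doc, "\n") {
		if strings.HasPrefix(line, " ") || strings.HasPrefix(line, "\t") {
			continue // nested key, not top-level
		}
		if strings.HasPrefix(line, "apiVersion:") {
			hasAPIVersion = true
		}
		if strings.HasPrefix(line, "kind:") {
			hasKind = true
		}
	}
	return hasAPIVersion, hasKind
}

func main() {
	// Default path taken from the log above; any manifest path can be passed instead.
	path := "/etc/kubernetes/addons/ig-crd.yaml"
	if len(os.Args) > 1 {
		path = os.Args[1]
	}
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for i, doc := range strings.Split(string(data), "\n---") {
		if strings.TrimSpace(doc) == "" {
			continue
		}
		if a, k := hasTypeMeta(doc); !a || !k {
			fmt.Printf("document %d: apiVersion set=%v, kind set=%v\n", i, a, k)
		}
	}
}

Run against the offending manifest, a sketch like this would report which document is missing its type metadata before kubectl apply is attempted.
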
	I1026 14:18:17.652224  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:17.779620  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:18.153713  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:18.279239  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:18.653666  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:18.778614  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:19.153714  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:19.278918  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:19.652643  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:19.779087  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:20.152848  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:20.278673  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:20.653499  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:20.778584  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:21.153585  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:21.279382  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:21.653890  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:21.779095  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:22.152741  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:22.278511  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:22.653428  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:22.778299  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:23.153415  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:23.277856  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:23.653103  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:23.779207  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:24.153037  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:24.278768  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:24.653760  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:24.779296  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:25.153208  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:25.279066  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:25.652296  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:25.779293  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:26.152835  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:26.278775  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:26.652768  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:26.778860  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:27.152996  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:27.278919  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:27.652152  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:27.778954  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:28.153026  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:28.278723  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:28.652414  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:28.778030  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:29.153575  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:29.278674  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:29.653563  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:29.778644  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:30.153298  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:30.279247  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:30.652919  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:30.778884  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:31.153314  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:31.280475  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:31.653334  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:31.778704  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:32.153221  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:32.278849  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:32.652936  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:32.779045  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:33.151995  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:33.279386  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:33.653091  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:33.779114  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:34.152581  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:34.278705  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:34.653621  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:34.779333  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:35.153606  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:35.278013  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:35.652518  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:35.778285  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:36.153222  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:36.279756  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:36.653277  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:36.779893  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:37.153120  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:37.278525  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:37.653245  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:37.779183  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:38.152280  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:38.279546  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:38.653561  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:38.778939  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:39.152882  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:39.278682  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:39.653095  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:39.779367  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:40.152531  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:40.278067  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:40.652714  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:40.779121  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:41.152865  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:41.279031  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:41.653052  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:41.778919  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:42.152893  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:42.280188  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:42.652973  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:42.778817  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:43.156313  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:43.280420  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:43.655621  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:43.784801  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:44.156517  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:44.279900  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:44.653161  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:44.781142  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:45.155703  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:45.278737  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:45.653069  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:45.780695  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:46.153250  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:46.279126  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:46.654637  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:46.780155  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:47.157947  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:47.279803  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:47.653841  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:47.780508  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:48.154639  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:48.279017  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:48.652992  141940 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 14:18:48.778771  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:49.153897  141940 kapi.go:107] duration metric: took 2m22.504846529s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1026 14:18:49.278403  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:49.779105  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:50.278319  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:50.783290  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:51.280923  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:51.780332  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:52.279308  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:52.780557  141940 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 14:18:53.279378  141940 kapi.go:107] duration metric: took 2m23.504067415s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1026 14:18:53.280877  141940 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-061252 cluster.
	I1026 14:18:53.281908  141940 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1026 14:18:53.282954  141940 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1026 14:18:53.284127  141940 out.go:179] * Enabled addons: nvidia-device-plugin, amd-gpu-device-plugin, storage-provisioner, cloud-spanner, registry-creds, ingress-dns, default-storageclass, storage-provisioner-rancher, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1026 14:18:53.285354  141940 addons.go:514] duration metric: took 2m35.027562663s for enable addons: enabled=[nvidia-device-plugin amd-gpu-device-plugin storage-provisioner cloud-spanner registry-creds ingress-dns default-storageclass storage-provisioner-rancher metrics-server yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
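
The gcp-auth messages a few lines up describe the addon's opt-out mechanism: once enabled, credentials are mounted into every new pod unless the pod carries a gcp-auth-skip-secret label. The sketch below is purely illustrative and not minikube code; only the label key comes from the log, while the value "true", the pod name, and the container are assumptions. It emits a minimal pod manifest as JSON using only the Go standard library.

// skiplabel.go: hypothetical illustration of where the gcp-auth-skip-secret
// label mentioned above would sit on a pod. The label value "true" and the
// rest of this manifest are assumptions; only the key is taken from the log.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	pod := map[string]any{
		"apiVersion": "v1",
		"kind":       "Pod",
		"metadata": map[string]any{
			"name": "no-gcp-creds",
			"labels": map[string]string{
				// Asks the gcp-auth webhook not to mount credentials into this pod.
				"gcp-auth-skip-secret": "true",
			},
		},
		"spec": map[string]any{
			"containers": []map[string]any{
				{"name": "app", "image": "busybox", "command": []string{"sleep", "3600"}},
			},
		},
	}
	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // could be piped into `kubectl apply -f -`
}
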
	I1026 14:18:53.285395  141940 start.go:246] waiting for cluster config update ...
	I1026 14:18:53.285417  141940 start.go:255] writing updated cluster config ...
	I1026 14:18:53.285801  141940 ssh_runner.go:195] Run: rm -f paused
	I1026 14:18:53.291293  141940 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 14:18:53.295637  141940 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-lq8mf" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:18:53.301689  141940 pod_ready.go:94] pod "coredns-66bc5c9577-lq8mf" is "Ready"
	I1026 14:18:53.301710  141940 pod_ready.go:86] duration metric: took 6.053185ms for pod "coredns-66bc5c9577-lq8mf" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:18:53.303418  141940 pod_ready.go:83] waiting for pod "etcd-addons-061252" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:18:53.307590  141940 pod_ready.go:94] pod "etcd-addons-061252" is "Ready"
	I1026 14:18:53.307616  141940 pod_ready.go:86] duration metric: took 4.175404ms for pod "etcd-addons-061252" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:18:53.309742  141940 pod_ready.go:83] waiting for pod "kube-apiserver-addons-061252" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:18:53.313520  141940 pod_ready.go:94] pod "kube-apiserver-addons-061252" is "Ready"
	I1026 14:18:53.313539  141940 pod_ready.go:86] duration metric: took 3.778801ms for pod "kube-apiserver-addons-061252" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:18:53.315635  141940 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-061252" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:18:53.695898  141940 pod_ready.go:94] pod "kube-controller-manager-addons-061252" is "Ready"
	I1026 14:18:53.695924  141940 pod_ready.go:86] duration metric: took 380.268256ms for pod "kube-controller-manager-addons-061252" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:18:53.895378  141940 pod_ready.go:83] waiting for pod "kube-proxy-ltxkd" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:18:54.297073  141940 pod_ready.go:94] pod "kube-proxy-ltxkd" is "Ready"
	I1026 14:18:54.297100  141940 pod_ready.go:86] duration metric: took 401.692656ms for pod "kube-proxy-ltxkd" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:18:54.496518  141940 pod_ready.go:83] waiting for pod "kube-scheduler-addons-061252" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:18:54.894930  141940 pod_ready.go:94] pod "kube-scheduler-addons-061252" is "Ready"
	I1026 14:18:54.894959  141940 pod_ready.go:86] duration metric: took 398.412829ms for pod "kube-scheduler-addons-061252" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 14:18:54.894976  141940 pod_ready.go:40] duration metric: took 1.603655481s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 14:18:54.942569  141940 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1026 14:18:54.944407  141940 out.go:179] * Done! kubectl is now configured to use "addons-061252" cluster and "default" namespace by default
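
The long runs of kapi.go:96 lines and the pod_ready.go block above all follow one pattern: poll a readiness condition on a fixed interval, stop when it succeeds or when the overall budget (for example the 4m0s extra wait) expires, then report a duration metric. Below is a minimal, generic sketch of that poll-with-deadline loop; it is not minikube's kapi or pod_ready implementation, and checkReady is a hypothetical stand-in for the real pod check.

// waitloop.go: generic sketch of the poll-until-ready pattern seen above.
// Not minikube code; checkReady is a stand-in for "is the pod Ready?".
package main

import (
	"context"
	"fmt"
	"time"
)

// waitFor calls check every interval until it returns true, returns an error,
// or ctx's deadline expires.
func waitFor(ctx context.Context, interval time.Duration, check func() (bool, error)) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		ok, err := check()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("condition not met before deadline: %w", ctx.Err())
		case <-ticker.C:
			// fall through and check again
		}
	}
}

func main() {
	// Overall budget, analogous to the "extra waiting up to 4m0s" above.
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()

	start := time.Now()
	// Hypothetical condition that becomes true after two seconds.
	checkReady := func() (bool, error) {
		return time.Since(start) > 2*time.Second, nil
	}
	if err := waitFor(ctx, 500*time.Millisecond, checkReady); err != nil {
		fmt.Println("wait failed:", err)
		return
	}
	fmt.Printf("duration metric: took %s to wait for condition\n", time.Since(start).Round(time.Millisecond))
}
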
	
	
	==> CRI-O <==
	Oct 26 14:22:13 addons-061252 crio[811]: time="2025-10-26 14:22:13.133916661Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761488533133888021,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598025,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c0f147b1-07a1-499b-8653-1e7b608d250e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 14:22:13 addons-061252 crio[811]: time="2025-10-26 14:22:13.134788398Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4f754d9c-8496-44f7-908c-2f8b6d68bf67 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 14:22:13 addons-061252 crio[811]: time="2025-10-26 14:22:13.134862414Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4f754d9c-8496-44f7-908c-2f8b6d68bf67 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 14:22:13 addons-061252 crio[811]: time="2025-10-26 14:22:13.135230338Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:30da37e72d11533405346c508cb131b808640a68753133394021acbb8a124c4b,PodSandboxId:7e7e3045c04549877a5845f872c147b820437ea1ad8c91871c7c8cfd7d1c6718,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1761488390797876906,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: acf79c32-b924-4f27-be81-436a760fbf38,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e510c1906b20ea8acfde0e9bedd5bfaa1152c10a820b29ce45d868213df6e2e,PodSandboxId:a62eb6c3efdcdc7a265aee55b1605e41812fdfa20bd7f588ab235ab3b545ffd8,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:4fae4c1c18e77352b66e795f7d98a24f775d1e9f3ef847454e4857244ebc6c03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b19891abe61fd4f334e0bb4345313cac562b66561765ae851db1ef2f81ba249a,State:CONTAINER_RUNNING,CreatedAt:1761488378069724038,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-6945c6f4d-glnc2,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: f3f50f78-ed48-47ad-89b5-4a048f08fdd1,},Annotations:map[string]string{io.kubernetes.container.hash: b710
2817,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be93168338da09edf66a85509af1cab535126608e47442856ef2a8a4ac1b5ef3,PodSandboxId:aa6918747d85f7f3f1741c1200edc16fdc4e38271ea4d1aa94dc5eb64ca7e975,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761488340252100184,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6fe79797-3a20-4bb9-83df
-48301b29d260,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65a22cb03c14053783c9c761c56f06c238a0813ca273e0342031794080b0bc5a,PodSandboxId:5cb9b875dd605b747332cb79930a43f33c42cccc3de1032af56330fa5b660476,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761488327908855578,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-sjbgn,io.kubernetes.pod.namespace: ingress-nginx,io.k
ubernetes.pod.uid: fce91f6d-2a62-4675-a237-6ae712d3a179,},Annotations:map[string]string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:d27facc2e6ef62cdde9fdabd3bdf1ddd673012d47f2b5542033355b934880301,PodSandboxId:e4928b93e95ba734d1a7b6d969487ec0c6cb56bfe42215c86f73ff90b6a54db1,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Annotations:map[string]string{},UserSpecified
Image:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761488267310626572,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-z89n5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c18a553e-193a-4d19-af10-370ef48d6720,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d5c450886b21a68b9d2992e3b61716eb887dba21f80d80caba520af8c8559aa,PodSandboxId:1f59e65cc385eb64f152b8da348628a540129f96f78cfb83b76c448d3e1a161b,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotatio
ns:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761488253958032561,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-s4v4n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7c38cd28-1594-4d03-9b27-909c3f981590,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a600873e076dd9f2b14a78c04ebd05a025d3e1183e8876067119d06530f68041,PodSandboxId:e04bd1e6ac248d1f53bd8bad1bced2fb0f668c459ed7b77bea996037d2305dee,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76
d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761488238641832705,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-n9d9d,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 94543f49-5882-4838-97b6-bdddbc37c91c,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:382ae8ebb86f9f6bfab18ca2bb20d4e2d77b8d5bb1da54e61e19bae356863193,PodSandboxId:1e37b3fb65828e9ec8208ca5a0dc49cf923af9621d68643d25be65458ccb6887,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&
ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761488233244998174,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36b8ea46-a7ec-4896-9b65-f41a07fe5e13,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e025cc5172e8603a82d939dfdeffc7d798964289f75ccbe35b6aca7a3e7b19db,PodSandboxId:f68
d53cb8010651caf04d961f25cc7e551f1c45bd0a879f33a031fa7c5ded7a4,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761488206944231446,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-52p56,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d91ddf2b-867e-4c39-9243-20c81a38f82d,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:601cd8148fbad7ec26d38ec0e846a73af0a113788
cb55f689e67edc654c4d17f,PodSandboxId:e272b9af8a81a02e25b3fd47ee259538ae7d734c68a8aee91f739498e2ab17c5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761488185165974541,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb816fec-acef-4c73-bceb-27ec431327d7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abde6ced7bf75f0f6d0d6ec037b61efccb9971fd52ef477a84b78
76bfe2ec167,PodSandboxId:8d7b0bdb6089949aa16ef5acf4c98561c3dc5067e73b7999c05972bcc52c5805,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761488180226885256,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-lq8mf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28a691d1-64fb-44e3-8bff-c087e6941e32,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"r
eadiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5a72cc2eaca7f6f0b2c737fbffa8eadbd759789a77f688d7ad9a7f0b7cc4723,PodSandboxId:e6cdc2496239a00e0a4f65fecbde8713bce05d704ca43366d2b9fa6300550bf1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761488179269589935,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ltxkd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6476cb31-99c8-4ed0-88ec-1260d6304141,},Annotations:map[string]string{io.kubern
etes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:325425c6caa27c33d283060674e42fdb60477facbd935c8dc56b84e4744fa935,PodSandboxId:74d88ac8ba1e06fc5082eb8e3e036d9e3c9319c5747166d32f180cbf22578a5c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761488168238715003,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-061252,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7860302cda6fb395f4ae56156e845d2f,},Annotations:map[string]string
{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80af4cfdb1ffb967c084bf38fb67a25c347a69428f243380a00746db9a7d1bde,PodSandboxId:f475f296d5aa5d44e081dab56c0a4430b6e29884788c5822fe4c1227b18fa384,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761488168232694598,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-061252,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 03c5f65aaff4df8bc4ce465d8cda77e6,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16fa6e21515f21fb6e1f5c1a9352d1698798e4d92ef1410d64643a08373a7f8f,PodSandboxId:b0f91fe1781e8ae64dd56f92f32395d4d3b08840849bb83575bb5f8615a6d92c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761488168226119420,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-061252,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82976f61d9c6d83ad9b3dfef63f11439,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f715453b708a2fff83963961cb5e66eed1aa05e1745b8009633438e401c1d7,PodSandboxId:2fa9735f7a97256a1a3dbf1b2706fd95e2039d07cef87a874452e6b960326ada,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
,State:CONTAINER_RUNNING,CreatedAt:1761488168173897020,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-061252,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3073aacde09d10840cb3b725753c2364,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4f754d9c-8496-44f7-908c-2f8b6d68bf67 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 14:22:13 addons-061252 crio[811]: time="2025-10-26 14:22:13.174416509Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6f27749e-f489-4869-a8c4-5b8aacda919c name=/runtime.v1.RuntimeService/Version
	Oct 26 14:22:13 addons-061252 crio[811]: time="2025-10-26 14:22:13.174497282Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6f27749e-f489-4869-a8c4-5b8aacda919c name=/runtime.v1.RuntimeService/Version
	Oct 26 14:22:13 addons-061252 crio[811]: time="2025-10-26 14:22:13.176338013Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a97ce83f-a5a3-4e50-8ac7-0948f5860c73 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 14:22:13 addons-061252 crio[811]: time="2025-10-26 14:22:13.177627725Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761488533177602286,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598025,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a97ce83f-a5a3-4e50-8ac7-0948f5860c73 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 14:22:13 addons-061252 crio[811]: time="2025-10-26 14:22:13.178235120Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1b12c2d0-e0f4-4932-834e-f6b1c6a9d1b7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 14:22:13 addons-061252 crio[811]: time="2025-10-26 14:22:13.178330144Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1b12c2d0-e0f4-4932-834e-f6b1c6a9d1b7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 14:22:13 addons-061252 crio[811]: time="2025-10-26 14:22:13.178639378Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:30da37e72d11533405346c508cb131b808640a68753133394021acbb8a124c4b,PodSandboxId:7e7e3045c04549877a5845f872c147b820437ea1ad8c91871c7c8cfd7d1c6718,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1761488390797876906,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: acf79c32-b924-4f27-be81-436a760fbf38,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e510c1906b20ea8acfde0e9bedd5bfaa1152c10a820b29ce45d868213df6e2e,PodSandboxId:a62eb6c3efdcdc7a265aee55b1605e41812fdfa20bd7f588ab235ab3b545ffd8,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:4fae4c1c18e77352b66e795f7d98a24f775d1e9f3ef847454e4857244ebc6c03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b19891abe61fd4f334e0bb4345313cac562b66561765ae851db1ef2f81ba249a,State:CONTAINER_RUNNING,CreatedAt:1761488378069724038,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-6945c6f4d-glnc2,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: f3f50f78-ed48-47ad-89b5-4a048f08fdd1,},Annotations:map[string]string{io.kubernetes.container.hash: b710
2817,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be93168338da09edf66a85509af1cab535126608e47442856ef2a8a4ac1b5ef3,PodSandboxId:aa6918747d85f7f3f1741c1200edc16fdc4e38271ea4d1aa94dc5eb64ca7e975,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761488340252100184,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6fe79797-3a20-4bb9-83df
-48301b29d260,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65a22cb03c14053783c9c761c56f06c238a0813ca273e0342031794080b0bc5a,PodSandboxId:5cb9b875dd605b747332cb79930a43f33c42cccc3de1032af56330fa5b660476,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761488327908855578,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-sjbgn,io.kubernetes.pod.namespace: ingress-nginx,io.k
ubernetes.pod.uid: fce91f6d-2a62-4675-a237-6ae712d3a179,},Annotations:map[string]string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:d27facc2e6ef62cdde9fdabd3bdf1ddd673012d47f2b5542033355b934880301,PodSandboxId:e4928b93e95ba734d1a7b6d969487ec0c6cb56bfe42215c86f73ff90b6a54db1,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Annotations:map[string]string{},UserSpecified
Image:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761488267310626572,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-z89n5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c18a553e-193a-4d19-af10-370ef48d6720,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d5c450886b21a68b9d2992e3b61716eb887dba21f80d80caba520af8c8559aa,PodSandboxId:1f59e65cc385eb64f152b8da348628a540129f96f78cfb83b76c448d3e1a161b,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotatio
ns:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761488253958032561,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-s4v4n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7c38cd28-1594-4d03-9b27-909c3f981590,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a600873e076dd9f2b14a78c04ebd05a025d3e1183e8876067119d06530f68041,PodSandboxId:e04bd1e6ac248d1f53bd8bad1bced2fb0f668c459ed7b77bea996037d2305dee,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76
d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761488238641832705,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-n9d9d,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 94543f49-5882-4838-97b6-bdddbc37c91c,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:382ae8ebb86f9f6bfab18ca2bb20d4e2d77b8d5bb1da54e61e19bae356863193,PodSandboxId:1e37b3fb65828e9ec8208ca5a0dc49cf923af9621d68643d25be65458ccb6887,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&
ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761488233244998174,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36b8ea46-a7ec-4896-9b65-f41a07fe5e13,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e025cc5172e8603a82d939dfdeffc7d798964289f75ccbe35b6aca7a3e7b19db,PodSandboxId:f68
d53cb8010651caf04d961f25cc7e551f1c45bd0a879f33a031fa7c5ded7a4,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761488206944231446,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-52p56,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d91ddf2b-867e-4c39-9243-20c81a38f82d,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:601cd8148fbad7ec26d38ec0e846a73af0a113788
cb55f689e67edc654c4d17f,PodSandboxId:e272b9af8a81a02e25b3fd47ee259538ae7d734c68a8aee91f739498e2ab17c5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761488185165974541,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb816fec-acef-4c73-bceb-27ec431327d7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abde6ced7bf75f0f6d0d6ec037b61efccb9971fd52ef477a84b78
76bfe2ec167,PodSandboxId:8d7b0bdb6089949aa16ef5acf4c98561c3dc5067e73b7999c05972bcc52c5805,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761488180226885256,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-lq8mf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28a691d1-64fb-44e3-8bff-c087e6941e32,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"r
eadiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5a72cc2eaca7f6f0b2c737fbffa8eadbd759789a77f688d7ad9a7f0b7cc4723,PodSandboxId:e6cdc2496239a00e0a4f65fecbde8713bce05d704ca43366d2b9fa6300550bf1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761488179269589935,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ltxkd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6476cb31-99c8-4ed0-88ec-1260d6304141,},Annotations:map[string]string{io.kubern
etes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:325425c6caa27c33d283060674e42fdb60477facbd935c8dc56b84e4744fa935,PodSandboxId:74d88ac8ba1e06fc5082eb8e3e036d9e3c9319c5747166d32f180cbf22578a5c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761488168238715003,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-061252,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7860302cda6fb395f4ae56156e845d2f,},Annotations:map[string]string
{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80af4cfdb1ffb967c084bf38fb67a25c347a69428f243380a00746db9a7d1bde,PodSandboxId:f475f296d5aa5d44e081dab56c0a4430b6e29884788c5822fe4c1227b18fa384,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761488168232694598,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-061252,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 03c5f65aaff4df8bc4ce465d8cda77e6,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16fa6e21515f21fb6e1f5c1a9352d1698798e4d92ef1410d64643a08373a7f8f,PodSandboxId:b0f91fe1781e8ae64dd56f92f32395d4d3b08840849bb83575bb5f8615a6d92c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761488168226119420,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-061252,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82976f61d9c6d83ad9b3dfef63f11439,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f715453b708a2fff83963961cb5e66eed1aa05e1745b8009633438e401c1d7,PodSandboxId:2fa9735f7a97256a1a3dbf1b2706fd95e2039d07cef87a874452e6b960326ada,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
,State:CONTAINER_RUNNING,CreatedAt:1761488168173897020,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-061252,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3073aacde09d10840cb3b725753c2364,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1b12c2d0-e0f4-4932-834e-f6b1c6a9d1b7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 14:22:13 addons-061252 crio[811]: time="2025-10-26 14:22:13.213358811Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2d2b9965-9e3e-4df4-9ac4-351a685e43a8 name=/runtime.v1.RuntimeService/Version
	Oct 26 14:22:13 addons-061252 crio[811]: time="2025-10-26 14:22:13.213425819Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2d2b9965-9e3e-4df4-9ac4-351a685e43a8 name=/runtime.v1.RuntimeService/Version
	Oct 26 14:22:13 addons-061252 crio[811]: time="2025-10-26 14:22:13.214731724Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dd050ae1-11d9-4094-a229-31d50f4b92fc name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 14:22:13 addons-061252 crio[811]: time="2025-10-26 14:22:13.216284356Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761488533216224253,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598025,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dd050ae1-11d9-4094-a229-31d50f4b92fc name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 14:22:13 addons-061252 crio[811]: time="2025-10-26 14:22:13.216774687Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e07db316-6f9b-409c-8bf1-ab7726819126 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 14:22:13 addons-061252 crio[811]: time="2025-10-26 14:22:13.216824207Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e07db316-6f9b-409c-8bf1-ab7726819126 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 14:22:13 addons-061252 crio[811]: time="2025-10-26 14:22:13.217186972Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:30da37e72d11533405346c508cb131b808640a68753133394021acbb8a124c4b,PodSandboxId:7e7e3045c04549877a5845f872c147b820437ea1ad8c91871c7c8cfd7d1c6718,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1761488390797876906,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: acf79c32-b924-4f27-be81-436a760fbf38,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e510c1906b20ea8acfde0e9bedd5bfaa1152c10a820b29ce45d868213df6e2e,PodSandboxId:a62eb6c3efdcdc7a265aee55b1605e41812fdfa20bd7f588ab235ab3b545ffd8,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:4fae4c1c18e77352b66e795f7d98a24f775d1e9f3ef847454e4857244ebc6c03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b19891abe61fd4f334e0bb4345313cac562b66561765ae851db1ef2f81ba249a,State:CONTAINER_RUNNING,CreatedAt:1761488378069724038,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-6945c6f4d-glnc2,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: f3f50f78-ed48-47ad-89b5-4a048f08fdd1,},Annotations:map[string]string{io.kubernetes.container.hash: b710
2817,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be93168338da09edf66a85509af1cab535126608e47442856ef2a8a4ac1b5ef3,PodSandboxId:aa6918747d85f7f3f1741c1200edc16fdc4e38271ea4d1aa94dc5eb64ca7e975,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761488340252100184,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6fe79797-3a20-4bb9-83df
-48301b29d260,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65a22cb03c14053783c9c761c56f06c238a0813ca273e0342031794080b0bc5a,PodSandboxId:5cb9b875dd605b747332cb79930a43f33c42cccc3de1032af56330fa5b660476,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761488327908855578,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-sjbgn,io.kubernetes.pod.namespace: ingress-nginx,io.k
ubernetes.pod.uid: fce91f6d-2a62-4675-a237-6ae712d3a179,},Annotations:map[string]string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:d27facc2e6ef62cdde9fdabd3bdf1ddd673012d47f2b5542033355b934880301,PodSandboxId:e4928b93e95ba734d1a7b6d969487ec0c6cb56bfe42215c86f73ff90b6a54db1,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Annotations:map[string]string{},UserSpecified
Image:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761488267310626572,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-z89n5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c18a553e-193a-4d19-af10-370ef48d6720,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d5c450886b21a68b9d2992e3b61716eb887dba21f80d80caba520af8c8559aa,PodSandboxId:1f59e65cc385eb64f152b8da348628a540129f96f78cfb83b76c448d3e1a161b,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotatio
ns:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761488253958032561,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-s4v4n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7c38cd28-1594-4d03-9b27-909c3f981590,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a600873e076dd9f2b14a78c04ebd05a025d3e1183e8876067119d06530f68041,PodSandboxId:e04bd1e6ac248d1f53bd8bad1bced2fb0f668c459ed7b77bea996037d2305dee,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76
d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761488238641832705,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-n9d9d,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 94543f49-5882-4838-97b6-bdddbc37c91c,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:382ae8ebb86f9f6bfab18ca2bb20d4e2d77b8d5bb1da54e61e19bae356863193,PodSandboxId:1e37b3fb65828e9ec8208ca5a0dc49cf923af9621d68643d25be65458ccb6887,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&
ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761488233244998174,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36b8ea46-a7ec-4896-9b65-f41a07fe5e13,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e025cc5172e8603a82d939dfdeffc7d798964289f75ccbe35b6aca7a3e7b19db,PodSandboxId:f68
d53cb8010651caf04d961f25cc7e551f1c45bd0a879f33a031fa7c5ded7a4,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761488206944231446,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-52p56,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d91ddf2b-867e-4c39-9243-20c81a38f82d,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:601cd8148fbad7ec26d38ec0e846a73af0a113788
cb55f689e67edc654c4d17f,PodSandboxId:e272b9af8a81a02e25b3fd47ee259538ae7d734c68a8aee91f739498e2ab17c5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761488185165974541,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb816fec-acef-4c73-bceb-27ec431327d7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abde6ced7bf75f0f6d0d6ec037b61efccb9971fd52ef477a84b78
76bfe2ec167,PodSandboxId:8d7b0bdb6089949aa16ef5acf4c98561c3dc5067e73b7999c05972bcc52c5805,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761488180226885256,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-lq8mf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28a691d1-64fb-44e3-8bff-c087e6941e32,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"r
eadiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5a72cc2eaca7f6f0b2c737fbffa8eadbd759789a77f688d7ad9a7f0b7cc4723,PodSandboxId:e6cdc2496239a00e0a4f65fecbde8713bce05d704ca43366d2b9fa6300550bf1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761488179269589935,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ltxkd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6476cb31-99c8-4ed0-88ec-1260d6304141,},Annotations:map[string]string{io.kubern
etes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:325425c6caa27c33d283060674e42fdb60477facbd935c8dc56b84e4744fa935,PodSandboxId:74d88ac8ba1e06fc5082eb8e3e036d9e3c9319c5747166d32f180cbf22578a5c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761488168238715003,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-061252,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7860302cda6fb395f4ae56156e845d2f,},Annotations:map[string]string
{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80af4cfdb1ffb967c084bf38fb67a25c347a69428f243380a00746db9a7d1bde,PodSandboxId:f475f296d5aa5d44e081dab56c0a4430b6e29884788c5822fe4c1227b18fa384,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761488168232694598,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-061252,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 03c5f65aaff4df8bc4ce465d8cda77e6,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16fa6e21515f21fb6e1f5c1a9352d1698798e4d92ef1410d64643a08373a7f8f,PodSandboxId:b0f91fe1781e8ae64dd56f92f32395d4d3b08840849bb83575bb5f8615a6d92c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761488168226119420,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-061252,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82976f61d9c6d83ad9b3dfef63f11439,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f715453b708a2fff83963961cb5e66eed1aa05e1745b8009633438e401c1d7,PodSandboxId:2fa9735f7a97256a1a3dbf1b2706fd95e2039d07cef87a874452e6b960326ada,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
,State:CONTAINER_RUNNING,CreatedAt:1761488168173897020,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-061252,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3073aacde09d10840cb3b725753c2364,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e07db316-6f9b-409c-8bf1-ab7726819126 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 14:22:13 addons-061252 crio[811]: time="2025-10-26 14:22:13.251800203Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e8e595d1-2c74-481b-bd46-ee7d1698e9b9 name=/runtime.v1.RuntimeService/Version
	Oct 26 14:22:13 addons-061252 crio[811]: time="2025-10-26 14:22:13.251872308Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e8e595d1-2c74-481b-bd46-ee7d1698e9b9 name=/runtime.v1.RuntimeService/Version
	Oct 26 14:22:13 addons-061252 crio[811]: time="2025-10-26 14:22:13.252878483Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e54c2f54-bb22-4f02-8170-528f34ec640d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 14:22:13 addons-061252 crio[811]: time="2025-10-26 14:22:13.254302212Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761488533254232236,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:598025,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e54c2f54-bb22-4f02-8170-528f34ec640d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 14:22:13 addons-061252 crio[811]: time="2025-10-26 14:22:13.255160900Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=850ac690-8eb3-4abc-a70a-905cd23cc8ed name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 14:22:13 addons-061252 crio[811]: time="2025-10-26 14:22:13.255236860Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=850ac690-8eb3-4abc-a70a-905cd23cc8ed name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 14:22:13 addons-061252 crio[811]: time="2025-10-26 14:22:13.255617030Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:30da37e72d11533405346c508cb131b808640a68753133394021acbb8a124c4b,PodSandboxId:7e7e3045c04549877a5845f872c147b820437ea1ad8c91871c7c8cfd7d1c6718,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5,State:CONTAINER_RUNNING,CreatedAt:1761488390797876906,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: acf79c32-b924-4f27-be81-436a760fbf38,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e510c1906b20ea8acfde0e9bedd5bfaa1152c10a820b29ce45d868213df6e2e,PodSandboxId:a62eb6c3efdcdc7a265aee55b1605e41812fdfa20bd7f588ab235ab3b545ffd8,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:4fae4c1c18e77352b66e795f7d98a24f775d1e9f3ef847454e4857244ebc6c03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b19891abe61fd4f334e0bb4345313cac562b66561765ae851db1ef2f81ba249a,State:CONTAINER_RUNNING,CreatedAt:1761488378069724038,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-6945c6f4d-glnc2,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: f3f50f78-ed48-47ad-89b5-4a048f08fdd1,},Annotations:map[string]string{io.kubernetes.container.hash: b710
2817,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be93168338da09edf66a85509af1cab535126608e47442856ef2a8a4ac1b5ef3,PodSandboxId:aa6918747d85f7f3f1741c1200edc16fdc4e38271ea4d1aa94dc5eb64ca7e975,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761488340252100184,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6fe79797-3a20-4bb9-83df
-48301b29d260,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65a22cb03c14053783c9c761c56f06c238a0813ca273e0342031794080b0bc5a,PodSandboxId:5cb9b875dd605b747332cb79930a43f33c42cccc3de1032af56330fa5b660476,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761488327908855578,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-sjbgn,io.kubernetes.pod.namespace: ingress-nginx,io.k
ubernetes.pod.uid: fce91f6d-2a62-4675-a237-6ae712d3a179,},Annotations:map[string]string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:d27facc2e6ef62cdde9fdabd3bdf1ddd673012d47f2b5542033355b934880301,PodSandboxId:e4928b93e95ba734d1a7b6d969487ec0c6cb56bfe42215c86f73ff90b6a54db1,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,Annotations:map[string]string{},UserSpecified
Image:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761488267310626572,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-z89n5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c18a553e-193a-4d19-af10-370ef48d6720,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d5c450886b21a68b9d2992e3b61716eb887dba21f80d80caba520af8c8559aa,PodSandboxId:1f59e65cc385eb64f152b8da348628a540129f96f78cfb83b76c448d3e1a161b,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotatio
ns:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761488253958032561,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-s4v4n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7c38cd28-1594-4d03-9b27-909c3f981590,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a600873e076dd9f2b14a78c04ebd05a025d3e1183e8876067119d06530f68041,PodSandboxId:e04bd1e6ac248d1f53bd8bad1bced2fb0f668c459ed7b77bea996037d2305dee,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76
d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761488238641832705,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-n9d9d,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 94543f49-5882-4838-97b6-bdddbc37c91c,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:382ae8ebb86f9f6bfab18ca2bb20d4e2d77b8d5bb1da54e61e19bae356863193,PodSandboxId:1e37b3fb65828e9ec8208ca5a0dc49cf923af9621d68643d25be65458ccb6887,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&
ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761488233244998174,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36b8ea46-a7ec-4896-9b65-f41a07fe5e13,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e025cc5172e8603a82d939dfdeffc7d798964289f75ccbe35b6aca7a3e7b19db,PodSandboxId:f68
d53cb8010651caf04d961f25cc7e551f1c45bd0a879f33a031fa7c5ded7a4,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761488206944231446,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-52p56,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d91ddf2b-867e-4c39-9243-20c81a38f82d,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:601cd8148fbad7ec26d38ec0e846a73af0a113788
cb55f689e67edc654c4d17f,PodSandboxId:e272b9af8a81a02e25b3fd47ee259538ae7d734c68a8aee91f739498e2ab17c5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761488185165974541,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb816fec-acef-4c73-bceb-27ec431327d7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abde6ced7bf75f0f6d0d6ec037b61efccb9971fd52ef477a84b78
76bfe2ec167,PodSandboxId:8d7b0bdb6089949aa16ef5acf4c98561c3dc5067e73b7999c05972bcc52c5805,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761488180226885256,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-lq8mf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28a691d1-64fb-44e3-8bff-c087e6941e32,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"r
eadiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5a72cc2eaca7f6f0b2c737fbffa8eadbd759789a77f688d7ad9a7f0b7cc4723,PodSandboxId:e6cdc2496239a00e0a4f65fecbde8713bce05d704ca43366d2b9fa6300550bf1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761488179269589935,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ltxkd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6476cb31-99c8-4ed0-88ec-1260d6304141,},Annotations:map[string]string{io.kubern
etes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:325425c6caa27c33d283060674e42fdb60477facbd935c8dc56b84e4744fa935,PodSandboxId:74d88ac8ba1e06fc5082eb8e3e036d9e3c9319c5747166d32f180cbf22578a5c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761488168238715003,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-061252,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7860302cda6fb395f4ae56156e845d2f,},Annotations:map[string]string
{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80af4cfdb1ffb967c084bf38fb67a25c347a69428f243380a00746db9a7d1bde,PodSandboxId:f475f296d5aa5d44e081dab56c0a4430b6e29884788c5822fe4c1227b18fa384,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761488168232694598,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-061252,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 03c5f65aaff4df8bc4ce465d8cda77e6,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16fa6e21515f21fb6e1f5c1a9352d1698798e4d92ef1410d64643a08373a7f8f,PodSandboxId:b0f91fe1781e8ae64dd56f92f32395d4d3b08840849bb83575bb5f8615a6d92c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761488168226119420,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-061252,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82976f61d9c6d83ad9b3dfef63f11439,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f715453b708a2fff83963961cb5e66eed1aa05e1745b8009633438e401c1d7,PodSandboxId:2fa9735f7a97256a1a3dbf1b2706fd95e2039d07cef87a874452e6b960326ada,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
,State:CONTAINER_RUNNING,CreatedAt:1761488168173897020,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-061252,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3073aacde09d10840cb3b725753c2364,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=850ac690-8eb3-4abc-a70a-905cd23cc8ed name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	30da37e72d115       docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22                              2 minutes ago       Running             nginx                     0                   7e7e3045c0454       nginx
	2e510c1906b20       ghcr.io/headlamp-k8s/headlamp@sha256:4fae4c1c18e77352b66e795f7d98a24f775d1e9f3ef847454e4857244ebc6c03                        2 minutes ago       Running             headlamp                  0                   a62eb6c3efdcd       headlamp-6945c6f4d-glnc2
	be93168338da0       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   aa6918747d85f       busybox
	65a22cb03c140       registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd             3 minutes ago       Running             controller                0                   5cb9b875dd605       ingress-nginx-controller-675c5ddd98-sjbgn
	d27facc2e6ef6       08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2                                                             4 minutes ago       Exited              patch                     2                   e4928b93e95ba       ingress-nginx-admission-patch-z89n5
	8d5c450886b21       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39   4 minutes ago       Exited              create                    0                   1f59e65cc385e       ingress-nginx-admission-create-s4v4n
	a600873e076dd       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb            4 minutes ago       Running             gadget                    0                   e04bd1e6ac248       gadget-n9d9d
	382ae8ebb86f9       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               5 minutes ago       Running             minikube-ingress-dns      0                   1e37b3fb65828       kube-ingress-dns-minikube
	e025cc5172e86       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     5 minutes ago       Running             amd-gpu-device-plugin     0                   f68d53cb80106       amd-gpu-device-plugin-52p56
	601cd8148fbad       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   e272b9af8a81a       storage-provisioner
	abde6ced7bf75       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             5 minutes ago       Running             coredns                   0                   8d7b0bdb60899       coredns-66bc5c9577-lq8mf
	e5a72cc2eaca7       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                             5 minutes ago       Running             kube-proxy                0                   e6cdc2496239a       kube-proxy-ltxkd
	325425c6caa27       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                             6 minutes ago       Running             kube-controller-manager   0                   74d88ac8ba1e0       kube-controller-manager-addons-061252
	80af4cfdb1ffb       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                             6 minutes ago       Running             kube-apiserver            0                   f475f296d5aa5       kube-apiserver-addons-061252
	16fa6e21515f2       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                             6 minutes ago       Running             kube-scheduler            0                   b0f91fe1781e8       kube-scheduler-addons-061252
	14f715453b708       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             6 minutes ago       Running             etcd                      0                   2fa9735f7a972       etcd-addons-061252
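	
	# Hedged sketch, not captured output: the container table above can be re-listed by hand
	# against the cri-o runtime this profile uses. The ssh wrapper and sudo are assumptions
	# based on how this minikube profile is normally accessed.
	minikube -p addons-061252 ssh "sudo crictl ps -a"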
	
	
	==> coredns [abde6ced7bf75f0f6d0d6ec037b61efccb9971fd52ef477a84b7876bfe2ec167] <==
	[INFO] 10.244.0.8:60761 - 63319 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000095959s
	[INFO] 10.244.0.8:60761 - 16119 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000125453s
	[INFO] 10.244.0.8:60761 - 54732 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000213401s
	[INFO] 10.244.0.8:60761 - 8953 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000115458s
	[INFO] 10.244.0.8:60761 - 61865 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.001358602s
	[INFO] 10.244.0.8:60761 - 37230 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000130972s
	[INFO] 10.244.0.8:60761 - 52268 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000193178s
	[INFO] 10.244.0.8:40141 - 50529 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000175473s
	[INFO] 10.244.0.8:40141 - 50822 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000154351s
	[INFO] 10.244.0.8:43918 - 28052 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000102064s
	[INFO] 10.244.0.8:43918 - 28293 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000117306s
	[INFO] 10.244.0.8:39241 - 51526 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000076937s
	[INFO] 10.244.0.8:39241 - 51289 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000091868s
	[INFO] 10.244.0.8:53614 - 60080 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000154907s
	[INFO] 10.244.0.8:53614 - 60292 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000133672s
	[INFO] 10.244.0.23:44217 - 51606 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000334022s
	[INFO] 10.244.0.23:55297 - 18062 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000142129s
	[INFO] 10.244.0.23:43463 - 1916 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000120138s
	[INFO] 10.244.0.23:35903 - 41303 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000110648s
	[INFO] 10.244.0.23:40617 - 9352 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000077211s
	[INFO] 10.244.0.23:52785 - 1722 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000201778s
	[INFO] 10.244.0.23:43531 - 10543 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003724612s
	[INFO] 10.244.0.23:40335 - 50244 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.003795553s
	[INFO] 10.244.0.27:41791 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000217613s
	[INFO] 10.244.0.27:49165 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000144975s
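	
	# The NXDOMAIN lines above are the usual ndots:5 search-path expansion for in-cluster
	# lookups; only the final fully-qualified query returns NOERROR. A hedged way to reproduce
	# one lookup (the probe pod name and busybox tag are illustrative assumptions):
	kubectl --context addons-061252 run dns-probe --rm -it --restart=Never \
	  --image=busybox:1.36 -- nslookup registry.kube-system.svc.cluster.local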
	
	
	==> describe nodes <==
	Name:               addons-061252
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-061252
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46
	                    minikube.k8s.io/name=addons-061252
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T14_16_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-061252
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 14:16:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-061252
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 14:22:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 14:20:18 +0000   Sun, 26 Oct 2025 14:16:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 14:20:18 +0000   Sun, 26 Oct 2025 14:16:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 14:20:18 +0000   Sun, 26 Oct 2025 14:16:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 14:20:18 +0000   Sun, 26 Oct 2025 14:16:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.34
	  Hostname:    addons-061252
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4008588Ki
	  pods:               110
	System Info:
	  Machine ID:                 f2deb9cc073c4b228e7b49ff4094ceba
	  System UUID:                f2deb9cc-073c-4b22-8e7b-49ff4094ceba
	  Boot ID:                    8fe57cc0-7305-4756-a4ff-8ab401d71782
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m18s
	  default                     hello-world-app-5d498dc89-d7g99              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  gadget                      gadget-n9d9d                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m48s
	  headlamp                    headlamp-6945c6f4d-glnc2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m44s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-sjbgn    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         5m47s
	  kube-system                 amd-gpu-device-plugin-52p56                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 coredns-66bc5c9577-lq8mf                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m54s
	  kube-system                 etcd-addons-061252                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         6m
	  kube-system                 kube-apiserver-addons-061252                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 kube-controller-manager-addons-061252        200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m50s
	  kube-system                 kube-proxy-ltxkd                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 kube-scheduler-addons-061252                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m53s                kube-proxy       
	  Normal  NodeHasSufficientMemory  6m6s (x8 over 6m6s)  kubelet          Node addons-061252 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m6s (x8 over 6m6s)  kubelet          Node addons-061252 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m6s (x7 over 6m6s)  kubelet          Node addons-061252 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m                   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m                   kubelet          Node addons-061252 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m                   kubelet          Node addons-061252 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m                   kubelet          Node addons-061252 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m59s                kubelet          Node addons-061252 status is now: NodeReady
	  Normal  RegisteredNode           5m56s                node-controller  Node addons-061252 event: Registered Node addons-061252 in Controller
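	
	# The "Allocated resources" percentages follow directly from the Allocatable block:
	#   cpu:    850m requested / 2000m allocatable          = 42.5%  -> shown as 42%
	#   memory: 260Mi requested / 4008588Ki (~3914Mi)       ~  6.6%  -> shown as 6%
	# This whole section can be regenerated with:
	kubectl --context addons-061252 describe node addons-061252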
	
	
	==> dmesg <==
	[  +9.589489] kauditd_printk_skb: 20 callbacks suppressed
	[Oct26 14:17] kauditd_printk_skb: 38 callbacks suppressed
	[ +11.220283] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.382640] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.509373] kauditd_printk_skb: 5 callbacks suppressed
	[  +6.481775] kauditd_printk_skb: 56 callbacks suppressed
	[  +0.231374] kauditd_printk_skb: 99 callbacks suppressed
	[  +4.146173] kauditd_printk_skb: 113 callbacks suppressed
	[  +5.567337] kauditd_printk_skb: 20 callbacks suppressed
	[Oct26 14:18] kauditd_printk_skb: 26 callbacks suppressed
	[  +0.000041] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.608737] kauditd_printk_skb: 68 callbacks suppressed
	[Oct26 14:19] kauditd_printk_skb: 32 callbacks suppressed
	[ +10.648337] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.000783] kauditd_printk_skb: 22 callbacks suppressed
	[  +0.994100] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.000305] kauditd_printk_skb: 95 callbacks suppressed
	[  +1.165859] kauditd_printk_skb: 198 callbacks suppressed
	[  +3.064356] kauditd_printk_skb: 82 callbacks suppressed
	[  +2.955329] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.233552] kauditd_printk_skb: 89 callbacks suppressed
	[Oct26 14:20] kauditd_printk_skb: 11 callbacks suppressed
	[  +0.000793] kauditd_printk_skb: 30 callbacks suppressed
	[  +8.812205] kauditd_printk_skb: 41 callbacks suppressed
	[Oct26 14:22] kauditd_printk_skb: 127 callbacks suppressed
	
	
	==> etcd [14f715453b708a2fff83963961cb5e66eed1aa05e1745b8009633438e401c1d7] <==
	{"level":"warn","ts":"2025-10-26T14:18:16.341786Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"193.888003ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-26T14:18:16.342708Z","caller":"traceutil/trace.go:172","msg":"trace[1712854478] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1241; }","duration":"194.848406ms","start":"2025-10-26T14:18:16.147848Z","end":"2025-10-26T14:18:16.342696Z","steps":["trace[1712854478] 'agreement among raft nodes before linearized reading'  (duration: 193.851562ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T14:18:16.342066Z","caller":"traceutil/trace.go:172","msg":"trace[1004637550] transaction","detail":"{read_only:false; response_revision:1242; number_of_response:1; }","duration":"229.951601ms","start":"2025-10-26T14:18:16.112104Z","end":"2025-10-26T14:18:16.342056Z","steps":["trace[1004637550] 'process raft request'  (duration: 229.835826ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T14:18:56.711139Z","caller":"traceutil/trace.go:172","msg":"trace[942498422] transaction","detail":"{read_only:false; response_revision:1335; number_of_response:1; }","duration":"167.15526ms","start":"2025-10-26T14:18:56.543970Z","end":"2025-10-26T14:18:56.711125Z","steps":["trace[942498422] 'process raft request'  (duration: 167.040653ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T14:19:37.954814Z","caller":"traceutil/trace.go:172","msg":"trace[892274098] linearizableReadLoop","detail":"{readStateIndex:1687; appliedIndex:1687; }","duration":"255.304781ms","start":"2025-10-26T14:19:37.699489Z","end":"2025-10-26T14:19:37.954794Z","steps":["trace[892274098] 'read index received'  (duration: 255.29641ms)","trace[892274098] 'applied index is now lower than readState.Index'  (duration: 7.348µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-26T14:19:37.954962Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"255.454979ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/registry-6b586f9694-cbv4c\" limit:1 ","response":"range_response_count:1 size:3768"}
	{"level":"info","ts":"2025-10-26T14:19:37.955049Z","caller":"traceutil/trace.go:172","msg":"trace[1449234038] range","detail":"{range_begin:/registry/pods/kube-system/registry-6b586f9694-cbv4c; range_end:; response_count:1; response_revision:1620; }","duration":"255.556609ms","start":"2025-10-26T14:19:37.699485Z","end":"2025-10-26T14:19:37.955042Z","steps":["trace[1449234038] 'agreement among raft nodes before linearized reading'  (duration: 255.406596ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T14:19:37.955483Z","caller":"traceutil/trace.go:172","msg":"trace[154181630] transaction","detail":"{read_only:false; response_revision:1621; number_of_response:1; }","duration":"423.670262ms","start":"2025-10-26T14:19:37.531805Z","end":"2025-10-26T14:19:37.955476Z","steps":["trace[154181630] 'process raft request'  (duration: 423.581587ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-26T14:19:37.955580Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-26T14:19:37.531786Z","time spent":"423.74338ms","remote":"127.0.0.1:53420","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1617 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-10-26T14:19:37.957516Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"204.072921ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-26T14:19:37.959569Z","caller":"traceutil/trace.go:172","msg":"trace[1887482963] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1621; }","duration":"206.130428ms","start":"2025-10-26T14:19:37.753429Z","end":"2025-10-26T14:19:37.959560Z","steps":["trace[1887482963] 'agreement among raft nodes before linearized reading'  (duration: 204.048179ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T14:19:37.959363Z","caller":"traceutil/trace.go:172","msg":"trace[507456970] transaction","detail":"{read_only:false; response_revision:1622; number_of_response:1; }","duration":"310.913787ms","start":"2025-10-26T14:19:37.648435Z","end":"2025-10-26T14:19:37.959349Z","steps":["trace[507456970] 'process raft request'  (duration: 310.849027ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-26T14:19:37.960223Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-26T14:19:37.648416Z","time spent":"311.76552ms","remote":"127.0.0.1:53590","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":541,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-061252\" mod_revision:1507 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-061252\" value_size:487 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-061252\" > >"}
	{"level":"info","ts":"2025-10-26T14:19:55.223783Z","caller":"traceutil/trace.go:172","msg":"trace[1645018715] linearizableReadLoop","detail":"{readStateIndex:1804; appliedIndex:1804; }","duration":"188.664069ms","start":"2025-10-26T14:19:55.035098Z","end":"2025-10-26T14:19:55.223763Z","steps":["trace[1645018715] 'read index received'  (duration: 188.658095ms)","trace[1645018715] 'applied index is now lower than readState.Index'  (duration: 4.922µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-26T14:19:55.223984Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"188.870442ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-26T14:19:55.224021Z","caller":"traceutil/trace.go:172","msg":"trace[64377816] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1728; }","duration":"188.923077ms","start":"2025-10-26T14:19:55.035090Z","end":"2025-10-26T14:19:55.224013Z","steps":["trace[64377816] 'agreement among raft nodes before linearized reading'  (duration: 188.83791ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T14:19:55.226074Z","caller":"traceutil/trace.go:172","msg":"trace[36634706] transaction","detail":"{read_only:false; response_revision:1729; number_of_response:1; }","duration":"361.42224ms","start":"2025-10-26T14:19:54.864640Z","end":"2025-10-26T14:19:55.226062Z","steps":["trace[36634706] 'process raft request'  (duration: 360.39751ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-26T14:19:55.227456Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-26T14:19:54.864623Z","time spent":"361.792735ms","remote":"127.0.0.1:53590","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":539,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1713 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:452 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"warn","ts":"2025-10-26T14:19:55.266212Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"215.096438ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-26T14:19:55.266341Z","caller":"traceutil/trace.go:172","msg":"trace[1057480667] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1729; }","duration":"215.235489ms","start":"2025-10-26T14:19:55.051095Z","end":"2025-10-26T14:19:55.266330Z","steps":["trace[1057480667] 'agreement among raft nodes before linearized reading'  (duration: 215.069029ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T14:19:55.266446Z","caller":"traceutil/trace.go:172","msg":"trace[1084205608] transaction","detail":"{read_only:false; response_revision:1730; number_of_response:1; }","duration":"206.934925ms","start":"2025-10-26T14:19:55.059501Z","end":"2025-10-26T14:19:55.266436Z","steps":["trace[1084205608] 'process raft request'  (duration: 206.810557ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-26T14:19:55.266522Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"198.727606ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-26T14:19:55.266539Z","caller":"traceutil/trace.go:172","msg":"trace[461996858] range","detail":"{range_begin:/registry/roles; range_end:; response_count:0; response_revision:1730; }","duration":"198.750564ms","start":"2025-10-26T14:19:55.067783Z","end":"2025-10-26T14:19:55.266534Z","steps":["trace[461996858] 'agreement among raft nodes before linearized reading'  (duration: 198.705504ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T14:20:22.267894Z","caller":"traceutil/trace.go:172","msg":"trace[1199788454] transaction","detail":"{read_only:false; response_revision:1836; number_of_response:1; }","duration":"368.504016ms","start":"2025-10-26T14:20:21.899377Z","end":"2025-10-26T14:20:22.267881Z","steps":["trace[1199788454] 'process raft request'  (duration: 368.392033ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-26T14:20:22.268369Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-26T14:20:21.899360Z","time spent":"368.732869ms","remote":"127.0.0.1:53590","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":483,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1825 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:420 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	
	
	==> kernel <==
	 14:22:13 up 6 min,  0 users,  load average: 0.61, 1.17, 0.66
	Linux addons-061252 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [80af4cfdb1ffb967c084bf38fb67a25c347a69428f243380a00746db9a7d1bde] <==
	 > logger="UnhandledError"
	I1026 14:17:05.782845       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1026 14:17:06.102158       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1026 14:19:07.736602       1 conn.go:339] Error on socket receive: read tcp 192.168.39.34:8443->192.168.39.1:36924: use of closed network connection
	E1026 14:19:07.939509       1 conn.go:339] Error on socket receive: read tcp 192.168.39.34:8443->192.168.39.1:36942: use of closed network connection
	I1026 14:19:29.113792       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.232.2"}
	E1026 14:19:45.434099       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1026 14:19:45.836230       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1026 14:19:46.043927       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.13.200"}
	I1026 14:20:03.101220       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1026 14:20:07.124968       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1026 14:20:25.995999       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1026 14:20:25.996141       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1026 14:20:26.037042       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1026 14:20:26.037195       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1026 14:20:26.038697       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1026 14:20:26.038745       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1026 14:20:26.051376       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1026 14:20:26.051404       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1026 14:20:26.067044       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1026 14:20:26.067119       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1026 14:20:27.039167       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1026 14:20:27.068842       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1026 14:20:27.095836       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1026 14:22:12.110914       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.102.158.252"}
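	
	# The "allocated clusterIPs" lines correspond to Services created during the addon tests
	# (default/nginx and default/hello-world-app). A hedged way to confirm they still resolve
	# to those IPs:
	kubectl --context addons-061252 get svc nginx hello-world-app -n default -o wide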
	
	
	==> kube-controller-manager [325425c6caa27c33d283060674e42fdb60477facbd935c8dc56b84e4744fa935] <==
	E1026 14:20:36.875339       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1026 14:20:43.523173       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1026 14:20:43.524183       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1026 14:20:44.210864       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1026 14:20:44.211848       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1026 14:20:48.277200       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1026 14:20:48.278182       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I1026 14:20:48.969841       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1026 14:20:48.969884       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 14:20:49.044544       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1026 14:20:49.044652       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	E1026 14:21:02.234602       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1026 14:21:02.235588       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1026 14:21:04.361215       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1026 14:21:04.362169       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1026 14:21:07.257099       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1026 14:21:07.257991       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1026 14:21:34.481530       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1026 14:21:34.482343       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1026 14:21:36.257885       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1026 14:21:36.258891       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1026 14:21:44.282990       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1026 14:21:44.284011       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1026 14:22:09.895225       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1026 14:22:09.896300       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
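	
	# The recurring PartialObjectMetadata watch failures begin shortly after the apiserver drops
	# the snapshot.storage.k8s.io groups (see "Terminating all watchers" at 14:20:27 above), so
	# they most likely refer to the removed VolumeSnapshot CRDs. A hedged check:
	kubectl --context addons-061252 get crd -o name | grep snapshot.storage.k8s.io \
	  || echo "no snapshot.storage.k8s.io CRDs installed"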
	
	
	==> kube-proxy [e5a72cc2eaca7f6f0b2c737fbffa8eadbd759789a77f688d7ad9a7f0b7cc4723] <==
	I1026 14:16:19.677828       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 14:16:19.778329       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 14:16:19.778444       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.34"]
	E1026 14:16:19.778637       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 14:16:19.867392       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1026 14:16:19.867453       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1026 14:16:19.867477       1 server_linux.go:132] "Using iptables Proxier"
	I1026 14:16:19.887507       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 14:16:19.905629       1 server.go:527] "Version info" version="v1.34.1"
	I1026 14:16:19.905647       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 14:16:19.918456       1 config.go:200] "Starting service config controller"
	I1026 14:16:19.918483       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 14:16:19.918650       1 config.go:106] "Starting endpoint slice config controller"
	I1026 14:16:19.918681       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 14:16:19.918697       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 14:16:19.918701       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 14:16:19.925138       1 config.go:309] "Starting node config controller"
	I1026 14:16:19.925333       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 14:16:19.925343       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 14:16:20.019474       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 14:16:20.019544       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1026 14:16:20.021410       1 shared_informer.go:356] "Caches are synced" controller="service config"
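	
	# The IPv6 warning above only means the guest kernel has no ip6tables nat table; kube-proxy
	# falls back to single-stack IPv4. A hedged way to inspect the IPv4 rules it did program:
	minikube -p addons-061252 ssh "sudo iptables -t nat -L KUBE-SERVICES -n | head -n 20"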
	
	
	==> kube-scheduler [16fa6e21515f21fb6e1f5c1a9352d1698798e4d92ef1410d64643a08373a7f8f] <==
	E1026 14:16:10.866118       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1026 14:16:10.866210       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 14:16:10.867499       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 14:16:10.867565       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 14:16:10.867631       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1026 14:16:10.867687       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1026 14:16:10.867771       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1026 14:16:10.867905       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1026 14:16:10.868340       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1026 14:16:10.866284       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 14:16:10.868475       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1026 14:16:11.689850       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 14:16:11.701220       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1026 14:16:11.730302       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1026 14:16:11.845220       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1026 14:16:11.855583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 14:16:11.913234       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 14:16:11.932147       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1026 14:16:11.972707       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1026 14:16:11.982193       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1026 14:16:12.007734       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 14:16:12.009302       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1026 14:16:12.034600       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1026 14:16:12.054310       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	I1026 14:16:14.635395       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 14:20:43 addons-061252 kubelet[1492]: I1026 14:20:43.288932    1492 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-66bc5c9577-lq8mf" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 14:20:43 addons-061252 kubelet[1492]: E1026 14:20:43.553681    1492 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761488443553178071  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 26 14:20:43 addons-061252 kubelet[1492]: E1026 14:20:43.553849    1492 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761488443553178071  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 26 14:20:53 addons-061252 kubelet[1492]: E1026 14:20:53.555821    1492 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761488453555425771  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 26 14:20:53 addons-061252 kubelet[1492]: E1026 14:20:53.556487    1492 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761488453555425771  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 26 14:21:03 addons-061252 kubelet[1492]: E1026 14:21:03.558995    1492 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761488463558489404  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 26 14:21:03 addons-061252 kubelet[1492]: E1026 14:21:03.559046    1492 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761488463558489404  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 26 14:21:13 addons-061252 kubelet[1492]: E1026 14:21:13.561930    1492 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761488473561635001  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 26 14:21:13 addons-061252 kubelet[1492]: E1026 14:21:13.561951    1492 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761488473561635001  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 26 14:21:23 addons-061252 kubelet[1492]: E1026 14:21:23.564560    1492 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761488483564131937  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 26 14:21:23 addons-061252 kubelet[1492]: E1026 14:21:23.564582    1492 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761488483564131937  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 26 14:21:33 addons-061252 kubelet[1492]: E1026 14:21:33.567649    1492 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761488493566686265  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 26 14:21:33 addons-061252 kubelet[1492]: E1026 14:21:33.567689    1492 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761488493566686265  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 26 14:21:34 addons-061252 kubelet[1492]: I1026 14:21:34.287812    1492 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-52p56" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 14:21:42 addons-061252 kubelet[1492]: I1026 14:21:42.288063    1492 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 14:21:43 addons-061252 kubelet[1492]: E1026 14:21:43.570756    1492 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761488503570491244  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 26 14:21:43 addons-061252 kubelet[1492]: E1026 14:21:43.570778    1492 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761488503570491244  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 26 14:21:53 addons-061252 kubelet[1492]: E1026 14:21:53.574616    1492 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761488513574320700  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 26 14:21:53 addons-061252 kubelet[1492]: E1026 14:21:53.574636    1492 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761488513574320700  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 26 14:22:03 addons-061252 kubelet[1492]: E1026 14:22:03.576652    1492 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761488523576175385  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 26 14:22:03 addons-061252 kubelet[1492]: E1026 14:22:03.576674    1492 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761488523576175385  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 26 14:22:12 addons-061252 kubelet[1492]: I1026 14:22:12.182907    1492 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6gp9\" (UniqueName: \"kubernetes.io/projected/e2d65cd6-e8d7-475d-8dc3-c99d7b92b9bf-kube-api-access-w6gp9\") pod \"hello-world-app-5d498dc89-d7g99\" (UID: \"e2d65cd6-e8d7-475d-8dc3-c99d7b92b9bf\") " pod="default/hello-world-app-5d498dc89-d7g99"
	Oct 26 14:22:12 addons-061252 kubelet[1492]: I1026 14:22:12.287612    1492 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-66bc5c9577-lq8mf" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 14:22:13 addons-061252 kubelet[1492]: E1026 14:22:13.579410    1492 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761488533578840343  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	Oct 26 14:22:13 addons-061252 kubelet[1492]: E1026 14:22:13.579447    1492 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761488533578840343  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:598025}  inodes_used:{value:201}}"
	
	
	==> storage-provisioner [601cd8148fbad7ec26d38ec0e846a73af0a113788cb55f689e67edc654c4d17f] <==
	W1026 14:21:48.738882       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:21:50.742341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:21:50.755395       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:21:52.758819       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:21:52.763132       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:21:54.765799       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:21:54.772197       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:21:56.774868       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:21:56.779578       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:21:58.782753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:21:58.790446       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:22:00.794125       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:22:00.799209       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:22:02.802949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:22:02.811165       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:22:04.814638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:22:04.819876       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:22:06.822989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:22:06.827777       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:22:08.830391       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:22:08.835165       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:22:10.838753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:22:10.843672       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:22:12.848026       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 14:22:12.852904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-061252 -n addons-061252
helpers_test.go:269: (dbg) Run:  kubectl --context addons-061252 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-d7g99 ingress-nginx-admission-create-s4v4n ingress-nginx-admission-patch-z89n5
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-061252 describe pod hello-world-app-5d498dc89-d7g99 ingress-nginx-admission-create-s4v4n ingress-nginx-admission-patch-z89n5
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-061252 describe pod hello-world-app-5d498dc89-d7g99 ingress-nginx-admission-create-s4v4n ingress-nginx-admission-patch-z89n5: exit status 1 (68.248945ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-d7g99
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-061252/192.168.39.34
	Start Time:       Sun, 26 Oct 2025 14:22:12 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w6gp9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-w6gp9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-d7g99 to addons-061252
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-s4v4n" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-z89n5" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-061252 describe pod hello-world-app-5d498dc89-d7g99 ingress-nginx-admission-create-s4v4n ingress-nginx-admission-patch-z89n5: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-061252 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-061252 addons disable ingress-dns --alsologtostderr -v=1: (1.761237202s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-061252 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-061252 addons disable ingress --alsologtostderr -v=1: (7.701200349s)
--- FAIL: TestAddons/parallel/Ingress (158.20s)
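For reference, the post-mortem flow above first lists every pod whose phase is not Running (the jsonpath query with --field-selector=status.phase!=Running) and then describes each of them. The same query can be issued programmatically; the sketch below is a minimal client-go version of it, assuming a default kubeconfig location, and is illustrative rather than the helpers_test.go code.

// List non-Running pods across all namespaces, mirroring the post-mortem's
// "--field-selector=status.phase!=Running" kubectl query (sketch only).
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config") // assumed path
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Empty namespace ("") means all namespaces, like kubectl's -A flag.
	pods, err := client.CoreV1().Pods("").List(context.TODO(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}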

                                                
                                    
TestPreload (131.01s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-195073 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
E1026 15:04:43.946154  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/functional-946873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-195073 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: (1m2.991627392s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-195073 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-195073 image pull gcr.io/k8s-minikube/busybox: (3.43865745s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-195073
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-195073: (6.893329039s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-195073 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-195073 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (54.908242805s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-195073 image list
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf

                                                
                                                
-- /stdout --
panic.go:636: *** TestPreload FAILED at 2025-10-26 15:06:04.200471934 +0000 UTC m=+3070.793903377
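For reference, the assertion that failed above reduces to: run "image list" against the restarted profile and require that the gcr.io/k8s-minikube/busybox image pulled before the stop is still listed. A minimal sketch of that check follows; the binary path and image name come from the log, while the substring match is an assumption rather than the exact preload_test.go logic.

// Check whether the busybox image survived the stop/restart by scanning the
// "image list" output for it (sketch only, not the real test helper).
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "test-preload-195073", "image", "list").CombinedOutput()
	if err != nil {
		fmt.Printf("image list failed: %v\n%s", err, out)
		return
	}
	// In the run above only the core Kubernetes images were printed, so this
	// reports the image as missing.
	if !strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
		fmt.Println("gcr.io/k8s-minikube/busybox missing from image list")
	}
}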
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-195073 -n test-preload-195073
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-195073 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-195073 logs -n 25: (1.004003857s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-578731 ssh -n multinode-578731-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-578731     │ jenkins │ v1.37.0 │ 26 Oct 25 14:53 UTC │ 26 Oct 25 14:53 UTC │
	│ ssh     │ multinode-578731 ssh -n multinode-578731 sudo cat /home/docker/cp-test_multinode-578731-m03_multinode-578731.txt                                          │ multinode-578731     │ jenkins │ v1.37.0 │ 26 Oct 25 14:53 UTC │ 26 Oct 25 14:53 UTC │
	│ cp      │ multinode-578731 cp multinode-578731-m03:/home/docker/cp-test.txt multinode-578731-m02:/home/docker/cp-test_multinode-578731-m03_multinode-578731-m02.txt │ multinode-578731     │ jenkins │ v1.37.0 │ 26 Oct 25 14:53 UTC │ 26 Oct 25 14:53 UTC │
	│ ssh     │ multinode-578731 ssh -n multinode-578731-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-578731     │ jenkins │ v1.37.0 │ 26 Oct 25 14:53 UTC │ 26 Oct 25 14:53 UTC │
	│ ssh     │ multinode-578731 ssh -n multinode-578731-m02 sudo cat /home/docker/cp-test_multinode-578731-m03_multinode-578731-m02.txt                                  │ multinode-578731     │ jenkins │ v1.37.0 │ 26 Oct 25 14:53 UTC │ 26 Oct 25 14:53 UTC │
	│ node    │ multinode-578731 node stop m03                                                                                                                            │ multinode-578731     │ jenkins │ v1.37.0 │ 26 Oct 25 14:53 UTC │ 26 Oct 25 14:53 UTC │
	│ node    │ multinode-578731 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-578731     │ jenkins │ v1.37.0 │ 26 Oct 25 14:53 UTC │ 26 Oct 25 14:54 UTC │
	│ node    │ list -p multinode-578731                                                                                                                                  │ multinode-578731     │ jenkins │ v1.37.0 │ 26 Oct 25 14:54 UTC │                     │
	│ stop    │ -p multinode-578731                                                                                                                                       │ multinode-578731     │ jenkins │ v1.37.0 │ 26 Oct 25 14:54 UTC │ 26 Oct 25 14:57 UTC │
	│ start   │ -p multinode-578731 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-578731     │ jenkins │ v1.37.0 │ 26 Oct 25 14:57 UTC │ 26 Oct 25 14:59 UTC │
	│ node    │ list -p multinode-578731                                                                                                                                  │ multinode-578731     │ jenkins │ v1.37.0 │ 26 Oct 25 14:59 UTC │                     │
	│ node    │ multinode-578731 node delete m03                                                                                                                          │ multinode-578731     │ jenkins │ v1.37.0 │ 26 Oct 25 14:59 UTC │ 26 Oct 25 14:59 UTC │
	│ stop    │ multinode-578731 stop                                                                                                                                     │ multinode-578731     │ jenkins │ v1.37.0 │ 26 Oct 25 14:59 UTC │ 26 Oct 25 15:01 UTC │
	│ start   │ -p multinode-578731 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio                                                            │ multinode-578731     │ jenkins │ v1.37.0 │ 26 Oct 25 15:01 UTC │ 26 Oct 25 15:03 UTC │
	│ node    │ list -p multinode-578731                                                                                                                                  │ multinode-578731     │ jenkins │ v1.37.0 │ 26 Oct 25 15:03 UTC │                     │
	│ start   │ -p multinode-578731-m02 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-578731-m02 │ jenkins │ v1.37.0 │ 26 Oct 25 15:03 UTC │                     │
	│ start   │ -p multinode-578731-m03 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-578731-m03 │ jenkins │ v1.37.0 │ 26 Oct 25 15:03 UTC │ 26 Oct 25 15:03 UTC │
	│ node    │ add -p multinode-578731                                                                                                                                   │ multinode-578731     │ jenkins │ v1.37.0 │ 26 Oct 25 15:03 UTC │                     │
	│ delete  │ -p multinode-578731-m03                                                                                                                                   │ multinode-578731-m03 │ jenkins │ v1.37.0 │ 26 Oct 25 15:03 UTC │ 26 Oct 25 15:03 UTC │
	│ delete  │ -p multinode-578731                                                                                                                                       │ multinode-578731     │ jenkins │ v1.37.0 │ 26 Oct 25 15:03 UTC │ 26 Oct 25 15:03 UTC │
	│ start   │ -p test-preload-195073 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0   │ test-preload-195073  │ jenkins │ v1.37.0 │ 26 Oct 25 15:03 UTC │ 26 Oct 25 15:04 UTC │
	│ image   │ test-preload-195073 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-195073  │ jenkins │ v1.37.0 │ 26 Oct 25 15:04 UTC │ 26 Oct 25 15:05 UTC │
	│ stop    │ -p test-preload-195073                                                                                                                                    │ test-preload-195073  │ jenkins │ v1.37.0 │ 26 Oct 25 15:05 UTC │ 26 Oct 25 15:05 UTC │
	│ start   │ -p test-preload-195073 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                           │ test-preload-195073  │ jenkins │ v1.37.0 │ 26 Oct 25 15:05 UTC │ 26 Oct 25 15:06 UTC │
	│ image   │ test-preload-195073 image list                                                                                                                            │ test-preload-195073  │ jenkins │ v1.37.0 │ 26 Oct 25 15:06 UTC │ 26 Oct 25 15:06 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 15:05:09
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 15:05:09.157842  163849 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:05:09.158123  163849 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:05:09.158133  163849 out.go:374] Setting ErrFile to fd 2...
	I1026 15:05:09.158137  163849 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:05:09.158324  163849 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-137233/.minikube/bin
	I1026 15:05:09.158795  163849 out.go:368] Setting JSON to false
	I1026 15:05:09.159671  163849 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6443,"bootTime":1761484666,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 15:05:09.159769  163849 start.go:141] virtualization: kvm guest
	I1026 15:05:09.161719  163849 out.go:179] * [test-preload-195073] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 15:05:09.162855  163849 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 15:05:09.162873  163849 notify.go:220] Checking for updates...
	I1026 15:05:09.165127  163849 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 15:05:09.166098  163849 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-137233/kubeconfig
	I1026 15:05:09.167203  163849 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-137233/.minikube
	I1026 15:05:09.168189  163849 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 15:05:09.169445  163849 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 15:05:09.170918  163849 config.go:182] Loaded profile config "test-preload-195073": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1026 15:05:09.172413  163849 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1026 15:05:09.173386  163849 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 15:05:09.208393  163849 out.go:179] * Using the kvm2 driver based on existing profile
	I1026 15:05:09.209464  163849 start.go:305] selected driver: kvm2
	I1026 15:05:09.209489  163849 start.go:925] validating driver "kvm2" against &{Name:test-preload-195073 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-195073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.157 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:05:09.209625  163849 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 15:05:09.210677  163849 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:05:09.210719  163849 cni.go:84] Creating CNI manager for ""
	I1026 15:05:09.210808  163849 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 15:05:09.210893  163849 start.go:349] cluster config:
	{Name:test-preload-195073 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-195073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.157 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:05:09.211096  163849 iso.go:125] acquiring lock: {Name:mkfe78fcc13f0f0cc3fec30206c34a5da423b32d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:05:09.212411  163849 out.go:179] * Starting "test-preload-195073" primary control-plane node in "test-preload-195073" cluster
	I1026 15:05:09.213312  163849 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1026 15:05:09.391280  163849 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1026 15:05:09.391309  163849 cache.go:58] Caching tarball of preloaded images
	I1026 15:05:09.391513  163849 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1026 15:05:09.393268  163849 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1026 15:05:09.394268  163849 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1026 15:05:09.509491  163849 preload.go:290] Got checksum from GCS API "2acdb4dde52794f2167c79dcee7507ae"
	I1026 15:05:09.509549  163849 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21664-137233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1026 15:05:21.384607  163849 cache.go:61] Finished verifying existence of preloaded tar for v1.32.0 on crio
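For reference, the preload step above fetches the tarball with an md5 checksum obtained from the GCS API (2acdb4dde52794f2167c79dcee7507ae in this run) and verifies the download against it. The sketch below shows that download-then-verify pattern in outline; it is not minikube's download.go, and the destination path is an arbitrary assumption.

// Download a file and verify its MD5 against a known checksum (sketch only).
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

func downloadWithMD5(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	// Hash while writing so the payload is only streamed once.
	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4"
	if err := downloadWithMD5(url, "/tmp/preload.tar.lz4", "2acdb4dde52794f2167c79dcee7507ae"); err != nil {
		fmt.Println(err)
	}
}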
	I1026 15:05:21.384763  163849 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/test-preload-195073/config.json ...
	I1026 15:05:21.385017  163849 start.go:360] acquireMachinesLock for test-preload-195073: {Name:mka0e861669c2f6d38861d0614c7d3b8dd89392c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1026 15:05:21.385089  163849 start.go:364] duration metric: took 47.428µs to acquireMachinesLock for "test-preload-195073"
	I1026 15:05:21.385107  163849 start.go:96] Skipping create...Using existing machine configuration
	I1026 15:05:21.385112  163849 fix.go:54] fixHost starting: 
	I1026 15:05:21.387100  163849 fix.go:112] recreateIfNeeded on test-preload-195073: state=Stopped err=<nil>
	W1026 15:05:21.387129  163849 fix.go:138] unexpected machine state, will restart: <nil>
	I1026 15:05:21.388910  163849 out.go:252] * Restarting existing kvm2 VM for "test-preload-195073" ...
	I1026 15:05:21.388949  163849 main.go:141] libmachine: starting domain...
	I1026 15:05:21.388959  163849 main.go:141] libmachine: ensuring networks are active...
	I1026 15:05:21.389761  163849 main.go:141] libmachine: Ensuring network default is active
	I1026 15:05:21.390358  163849 main.go:141] libmachine: Ensuring network mk-test-preload-195073 is active
	I1026 15:05:21.390876  163849 main.go:141] libmachine: getting domain XML...
	I1026 15:05:21.392097  163849 main.go:141] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>test-preload-195073</name>
	  <uuid>d438be91-1d69-42bf-b845-7d06daa82dbc</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21664-137233/.minikube/machines/test-preload-195073/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21664-137233/.minikube/machines/test-preload-195073/test-preload-195073.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:6a:cc:9f'/>
	      <source network='mk-test-preload-195073'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:92:e5:71'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
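For reference, the XML above is the full libvirt domain definition that the kvm2 driver starts for this profile. The same definition can be read back outside minikube through the libvirt Go bindings; the sketch below assumes the libvirt.org/go/libvirt module (which requires the libvirt C development headers) and local access to qemu:///system, and is not part of the driver code.

// Dump the libvirt domain XML for the test-preload-195073 VM (sketch only).
package main

import (
	"fmt"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	dom, err := conn.LookupDomainByName("test-preload-195073")
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	// 0 means no special flags; this returns the live definition, matching
	// what the log printed above.
	xml, err := dom.GetXMLDesc(0)
	if err != nil {
		panic(err)
	}
	fmt.Println(xml)
}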
	
	I1026 15:05:22.681398  163849 main.go:141] libmachine: waiting for domain to start...
	I1026 15:05:22.682857  163849 main.go:141] libmachine: domain is now running
	I1026 15:05:22.682876  163849 main.go:141] libmachine: waiting for IP...
	I1026 15:05:22.683681  163849 main.go:141] libmachine: domain test-preload-195073 has defined MAC address 52:54:00:6a:cc:9f in network mk-test-preload-195073
	I1026 15:05:22.684178  163849 main.go:141] libmachine: domain test-preload-195073 has current primary IP address 192.168.39.157 and MAC address 52:54:00:6a:cc:9f in network mk-test-preload-195073
	I1026 15:05:22.684193  163849 main.go:141] libmachine: found domain IP: 192.168.39.157
	I1026 15:05:22.684198  163849 main.go:141] libmachine: reserving static IP address...
	I1026 15:05:22.684685  163849 main.go:141] libmachine: found host DHCP lease matching {name: "test-preload-195073", mac: "52:54:00:6a:cc:9f", ip: "192.168.39.157"} in network mk-test-preload-195073: {Iface:virbr1 ExpiryTime:2025-10-26 16:04:10 +0000 UTC Type:0 Mac:52:54:00:6a:cc:9f Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:test-preload-195073 Clientid:01:52:54:00:6a:cc:9f}
	I1026 15:05:22.684716  163849 main.go:141] libmachine: skip adding static IP to network mk-test-preload-195073 - found existing host DHCP lease matching {name: "test-preload-195073", mac: "52:54:00:6a:cc:9f", ip: "192.168.39.157"}
	I1026 15:05:22.684728  163849 main.go:141] libmachine: reserved static IP address 192.168.39.157 for domain test-preload-195073
	I1026 15:05:22.684747  163849 main.go:141] libmachine: waiting for SSH...
	I1026 15:05:22.684756  163849 main.go:141] libmachine: Getting to WaitForSSH function...
	I1026 15:05:22.686938  163849 main.go:141] libmachine: domain test-preload-195073 has defined MAC address 52:54:00:6a:cc:9f in network mk-test-preload-195073
	I1026 15:05:22.687301  163849 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6a:cc:9f", ip: ""} in network mk-test-preload-195073: {Iface:virbr1 ExpiryTime:2025-10-26 16:04:10 +0000 UTC Type:0 Mac:52:54:00:6a:cc:9f Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:test-preload-195073 Clientid:01:52:54:00:6a:cc:9f}
	I1026 15:05:22.687326  163849 main.go:141] libmachine: domain test-preload-195073 has defined IP address 192.168.39.157 and MAC address 52:54:00:6a:cc:9f in network mk-test-preload-195073
	I1026 15:05:22.687506  163849 main.go:141] libmachine: Using SSH client type: native
	I1026 15:05:22.687724  163849 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 192.168.39.157 22 <nil> <nil>}
	I1026 15:05:22.687734  163849 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1026 15:05:25.773790  163849 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.157:22: connect: no route to host
	I1026 15:05:31.853772  163849 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.157:22: connect: no route to host
	I1026 15:05:34.968147  163849 main.go:141] libmachine: SSH cmd err, output: <nil>: 
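For reference, the "waiting for SSH" phase above keeps dialing the guest's SSH port, treating "no route to host" while the VM is still booting as retryable, until a connection succeeds or a deadline passes. The sketch below shows that wait loop in outline; the timeouts are arbitrary assumptions, and minikube's libmachine code does considerably more (key auth, running "exit 0", and so on).

// Poll a TCP address until it accepts connections or the deadline passes
// (sketch of the SSH wait seen above; 192.168.39.157:22 comes from the log).
package main

import (
	"fmt"
	"net"
	"time"
)

func waitForTCP(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s: %v", addr, err)
		}
		// Errors such as "connect: no route to host" are expected while the
		// guest boots; back off briefly and retry.
		time.Sleep(2 * time.Second)
	}
}

func main() {
	if err := waitForTCP("192.168.39.157:22", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}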
	I1026 15:05:34.971487  163849 main.go:141] libmachine: domain test-preload-195073 has defined MAC address 52:54:00:6a:cc:9f in network mk-test-preload-195073
	I1026 15:05:34.971943  163849 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6a:cc:9f", ip: ""} in network mk-test-preload-195073: {Iface:virbr1 ExpiryTime:2025-10-26 16:05:32 +0000 UTC Type:0 Mac:52:54:00:6a:cc:9f Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:test-preload-195073 Clientid:01:52:54:00:6a:cc:9f}
	I1026 15:05:34.971970  163849 main.go:141] libmachine: domain test-preload-195073 has defined IP address 192.168.39.157 and MAC address 52:54:00:6a:cc:9f in network mk-test-preload-195073
	I1026 15:05:34.972185  163849 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/test-preload-195073/config.json ...
	I1026 15:05:34.972399  163849 machine.go:93] provisionDockerMachine start ...
	I1026 15:05:34.974652  163849 main.go:141] libmachine: domain test-preload-195073 has defined MAC address 52:54:00:6a:cc:9f in network mk-test-preload-195073
	I1026 15:05:34.974961  163849 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6a:cc:9f", ip: ""} in network mk-test-preload-195073: {Iface:virbr1 ExpiryTime:2025-10-26 16:05:32 +0000 UTC Type:0 Mac:52:54:00:6a:cc:9f Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:test-preload-195073 Clientid:01:52:54:00:6a:cc:9f}
	I1026 15:05:34.974987  163849 main.go:141] libmachine: domain test-preload-195073 has defined IP address 192.168.39.157 and MAC address 52:54:00:6a:cc:9f in network mk-test-preload-195073
	I1026 15:05:34.975134  163849 main.go:141] libmachine: Using SSH client type: native
	I1026 15:05:34.975408  163849 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 192.168.39.157 22 <nil> <nil>}
	I1026 15:05:34.975423  163849 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 15:05:35.089745  163849 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1026 15:05:35.089795  163849 buildroot.go:166] provisioning hostname "test-preload-195073"
	I1026 15:05:35.092731  163849 main.go:141] libmachine: domain test-preload-195073 has defined MAC address 52:54:00:6a:cc:9f in network mk-test-preload-195073
	I1026 15:05:35.093145  163849 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6a:cc:9f", ip: ""} in network mk-test-preload-195073: {Iface:virbr1 ExpiryTime:2025-10-26 16:05:32 +0000 UTC Type:0 Mac:52:54:00:6a:cc:9f Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:test-preload-195073 Clientid:01:52:54:00:6a:cc:9f}
	I1026 15:05:35.093181  163849 main.go:141] libmachine: domain test-preload-195073 has defined IP address 192.168.39.157 and MAC address 52:54:00:6a:cc:9f in network mk-test-preload-195073
	I1026 15:05:35.093383  163849 main.go:141] libmachine: Using SSH client type: native
	I1026 15:05:35.093598  163849 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 192.168.39.157 22 <nil> <nil>}
	I1026 15:05:35.093610  163849 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-195073 && echo "test-preload-195073" | sudo tee /etc/hostname
	I1026 15:05:35.222917  163849 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-195073
	
	I1026 15:05:35.226090  163849 main.go:141] libmachine: domain test-preload-195073 has defined MAC address 52:54:00:6a:cc:9f in network mk-test-preload-195073
	I1026 15:05:35.227286  163849 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6a:cc:9f", ip: ""} in network mk-test-preload-195073: {Iface:virbr1 ExpiryTime:2025-10-26 16:05:32 +0000 UTC Type:0 Mac:52:54:00:6a:cc:9f Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:test-preload-195073 Clientid:01:52:54:00:6a:cc:9f}
	I1026 15:05:35.227320  163849 main.go:141] libmachine: domain test-preload-195073 has defined IP address 192.168.39.157 and MAC address 52:54:00:6a:cc:9f in network mk-test-preload-195073
	I1026 15:05:35.227554  163849 main.go:141] libmachine: Using SSH client type: native
	I1026 15:05:35.227767  163849 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 192.168.39.157 22 <nil> <nil>}
	I1026 15:05:35.227802  163849 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-195073' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-195073/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-195073' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 15:05:35.350122  163849 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 15:05:35.350176  163849 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21664-137233/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-137233/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-137233/.minikube}
	I1026 15:05:35.350231  163849 buildroot.go:174] setting up certificates
	I1026 15:05:35.350242  163849 provision.go:84] configureAuth start
	I1026 15:05:35.353200  163849 main.go:141] libmachine: domain test-preload-195073 has defined MAC address 52:54:00:6a:cc:9f in network mk-test-preload-195073
	I1026 15:05:35.353593  163849 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6a:cc:9f", ip: ""} in network mk-test-preload-195073: {Iface:virbr1 ExpiryTime:2025-10-26 16:05:32 +0000 UTC Type:0 Mac:52:54:00:6a:cc:9f Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:test-preload-195073 Clientid:01:52:54:00:6a:cc:9f}
	I1026 15:05:35.353647  163849 main.go:141] libmachine: domain test-preload-195073 has defined IP address 192.168.39.157 and MAC address 52:54:00:6a:cc:9f in network mk-test-preload-195073
	I1026 15:05:35.355852  163849 main.go:141] libmachine: domain test-preload-195073 has defined MAC address 52:54:00:6a:cc:9f in network mk-test-preload-195073
	I1026 15:05:35.356196  163849 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6a:cc:9f", ip: ""} in network mk-test-preload-195073: {Iface:virbr1 ExpiryTime:2025-10-26 16:05:32 +0000 UTC Type:0 Mac:52:54:00:6a:cc:9f Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:test-preload-195073 Clientid:01:52:54:00:6a:cc:9f}
	I1026 15:05:35.356219  163849 main.go:141] libmachine: domain test-preload-195073 has defined IP address 192.168.39.157 and MAC address 52:54:00:6a:cc:9f in network mk-test-preload-195073
	I1026 15:05:35.356365  163849 provision.go:143] copyHostCerts
	I1026 15:05:35.356427  163849 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-137233/.minikube/ca.pem, removing ...
	I1026 15:05:35.356443  163849 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-137233/.minikube/ca.pem
	I1026 15:05:35.356526  163849 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-137233/.minikube/ca.pem (1082 bytes)
	I1026 15:05:35.356648  163849 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-137233/.minikube/cert.pem, removing ...
	I1026 15:05:35.356658  163849 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-137233/.minikube/cert.pem
	I1026 15:05:35.356686  163849 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-137233/.minikube/cert.pem (1123 bytes)
	I1026 15:05:35.356749  163849 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-137233/.minikube/key.pem, removing ...
	I1026 15:05:35.356759  163849 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-137233/.minikube/key.pem
	I1026 15:05:35.356782  163849 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-137233/.minikube/key.pem (1675 bytes)
	I1026 15:05:35.356834  163849 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-137233/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca-key.pem org=jenkins.test-preload-195073 san=[127.0.0.1 192.168.39.157 localhost minikube test-preload-195073]
	I1026 15:05:35.966956  163849 provision.go:177] copyRemoteCerts
	I1026 15:05:35.967023  163849 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 15:05:35.969535  163849 main.go:141] libmachine: domain test-preload-195073 has defined MAC address 52:54:00:6a:cc:9f in network mk-test-preload-195073
	I1026 15:05:35.969931  163849 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6a:cc:9f", ip: ""} in network mk-test-preload-195073: {Iface:virbr1 ExpiryTime:2025-10-26 16:05:32 +0000 UTC Type:0 Mac:52:54:00:6a:cc:9f Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:test-preload-195073 Clientid:01:52:54:00:6a:cc:9f}
	I1026 15:05:35.969956  163849 main.go:141] libmachine: domain test-preload-195073 has defined IP address 192.168.39.157 and MAC address 52:54:00:6a:cc:9f in network mk-test-preload-195073
	I1026 15:05:35.970119  163849 sshutil.go:53] new ssh client: &{IP:192.168.39.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/test-preload-195073/id_rsa Username:docker}
	I1026 15:05:36.057383  163849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 15:05:36.086147  163849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1026 15:05:36.115702  163849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1026 15:05:36.146080  163849 provision.go:87] duration metric: took 795.822957ms to configureAuth
	I1026 15:05:36.146112  163849 buildroot.go:189] setting minikube options for container-runtime
	I1026 15:05:36.146344  163849 config.go:182] Loaded profile config "test-preload-195073": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1026 15:05:36.149502  163849 main.go:141] libmachine: domain test-preload-195073 has defined MAC address 52:54:00:6a:cc:9f in network mk-test-preload-195073
	I1026 15:05:36.149958  163849 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6a:cc:9f", ip: ""} in network mk-test-preload-195073: {Iface:virbr1 ExpiryTime:2025-10-26 16:05:32 +0000 UTC Type:0 Mac:52:54:00:6a:cc:9f Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:test-preload-195073 Clientid:01:52:54:00:6a:cc:9f}
	I1026 15:05:36.149993  163849 main.go:141] libmachine: domain test-preload-195073 has defined IP address 192.168.39.157 and MAC address 52:54:00:6a:cc:9f in network mk-test-preload-195073
	I1026 15:05:36.150218  163849 main.go:141] libmachine: Using SSH client type: native
	I1026 15:05:36.150466  163849 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 192.168.39.157 22 <nil> <nil>}
	I1026 15:05:36.150487  163849 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 15:05:36.404119  163849 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 15:05:36.404149  163849 machine.go:96] duration metric: took 1.4317374s to provisionDockerMachine
	I1026 15:05:36.404163  163849 start.go:293] postStartSetup for "test-preload-195073" (driver="kvm2")
	I1026 15:05:36.404174  163849 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 15:05:36.404252  163849 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 15:05:36.407056  163849 main.go:141] libmachine: domain test-preload-195073 has defined MAC address 52:54:00:6a:cc:9f in network mk-test-preload-195073
	I1026 15:05:36.407473  163849 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6a:cc:9f", ip: ""} in network mk-test-preload-195073: {Iface:virbr1 ExpiryTime:2025-10-26 16:05:32 +0000 UTC Type:0 Mac:52:54:00:6a:cc:9f Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:test-preload-195073 Clientid:01:52:54:00:6a:cc:9f}
	I1026 15:05:36.407499  163849 main.go:141] libmachine: domain test-preload-195073 has defined IP address 192.168.39.157 and MAC address 52:54:00:6a:cc:9f in network mk-test-preload-195073
	I1026 15:05:36.407620  163849 sshutil.go:53] new ssh client: &{IP:192.168.39.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/test-preload-195073/id_rsa Username:docker}
	I1026 15:05:36.495377  163849 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 15:05:36.500232  163849 info.go:137] Remote host: Buildroot 2025.02
	I1026 15:05:36.500263  163849 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-137233/.minikube/addons for local assets ...
	I1026 15:05:36.500354  163849 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-137233/.minikube/files for local assets ...
	I1026 15:05:36.500452  163849 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-137233/.minikube/files/etc/ssl/certs/1412332.pem -> 1412332.pem in /etc/ssl/certs
	I1026 15:05:36.500598  163849 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 15:05:36.512646  163849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/files/etc/ssl/certs/1412332.pem --> /etc/ssl/certs/1412332.pem (1708 bytes)
	I1026 15:05:36.541564  163849 start.go:296] duration metric: took 137.38255ms for postStartSetup
	I1026 15:05:36.541618  163849 fix.go:56] duration metric: took 15.156504718s for fixHost
	I1026 15:05:36.544209  163849 main.go:141] libmachine: domain test-preload-195073 has defined MAC address 52:54:00:6a:cc:9f in network mk-test-preload-195073
	I1026 15:05:36.544684  163849 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6a:cc:9f", ip: ""} in network mk-test-preload-195073: {Iface:virbr1 ExpiryTime:2025-10-26 16:05:32 +0000 UTC Type:0 Mac:52:54:00:6a:cc:9f Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:test-preload-195073 Clientid:01:52:54:00:6a:cc:9f}
	I1026 15:05:36.544719  163849 main.go:141] libmachine: domain test-preload-195073 has defined IP address 192.168.39.157 and MAC address 52:54:00:6a:cc:9f in network mk-test-preload-195073
	I1026 15:05:36.544876  163849 main.go:141] libmachine: Using SSH client type: native
	I1026 15:05:36.545101  163849 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 192.168.39.157 22 <nil> <nil>}
	I1026 15:05:36.545114  163849 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1026 15:05:36.659652  163849 main.go:141] libmachine: SSH cmd err, output: <nil>: 1761491136.616321309
	
	I1026 15:05:36.659682  163849 fix.go:216] guest clock: 1761491136.616321309
	I1026 15:05:36.659690  163849 fix.go:229] Guest: 2025-10-26 15:05:36.616321309 +0000 UTC Remote: 2025-10-26 15:05:36.541624091 +0000 UTC m=+27.433479916 (delta=74.697218ms)
	I1026 15:05:36.659709  163849 fix.go:200] guest clock delta is within tolerance: 74.697218ms
	I1026 15:05:36.659715  163849 start.go:83] releasing machines lock for "test-preload-195073", held for 15.274613934s
	I1026 15:05:36.662495  163849 main.go:141] libmachine: domain test-preload-195073 has defined MAC address 52:54:00:6a:cc:9f in network mk-test-preload-195073
	I1026 15:05:36.662931  163849 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6a:cc:9f", ip: ""} in network mk-test-preload-195073: {Iface:virbr1 ExpiryTime:2025-10-26 16:05:32 +0000 UTC Type:0 Mac:52:54:00:6a:cc:9f Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:test-preload-195073 Clientid:01:52:54:00:6a:cc:9f}
	I1026 15:05:36.662961  163849 main.go:141] libmachine: domain test-preload-195073 has defined IP address 192.168.39.157 and MAC address 52:54:00:6a:cc:9f in network mk-test-preload-195073
	I1026 15:05:36.663545  163849 ssh_runner.go:195] Run: cat /version.json
	I1026 15:05:36.663643  163849 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 15:05:36.666672  163849 main.go:141] libmachine: domain test-preload-195073 has defined MAC address 52:54:00:6a:cc:9f in network mk-test-preload-195073
	I1026 15:05:36.666822  163849 main.go:141] libmachine: domain test-preload-195073 has defined MAC address 52:54:00:6a:cc:9f in network mk-test-preload-195073
	I1026 15:05:36.667105  163849 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6a:cc:9f", ip: ""} in network mk-test-preload-195073: {Iface:virbr1 ExpiryTime:2025-10-26 16:05:32 +0000 UTC Type:0 Mac:52:54:00:6a:cc:9f Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:test-preload-195073 Clientid:01:52:54:00:6a:cc:9f}
	I1026 15:05:36.667131  163849 main.go:141] libmachine: domain test-preload-195073 has defined IP address 192.168.39.157 and MAC address 52:54:00:6a:cc:9f in network mk-test-preload-195073
	I1026 15:05:36.667206  163849 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6a:cc:9f", ip: ""} in network mk-test-preload-195073: {Iface:virbr1 ExpiryTime:2025-10-26 16:05:32 +0000 UTC Type:0 Mac:52:54:00:6a:cc:9f Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:test-preload-195073 Clientid:01:52:54:00:6a:cc:9f}
	I1026 15:05:36.667241  163849 main.go:141] libmachine: domain test-preload-195073 has defined IP address 192.168.39.157 and MAC address 52:54:00:6a:cc:9f in network mk-test-preload-195073
	I1026 15:05:36.667276  163849 sshutil.go:53] new ssh client: &{IP:192.168.39.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/test-preload-195073/id_rsa Username:docker}
	I1026 15:05:36.667521  163849 sshutil.go:53] new ssh client: &{IP:192.168.39.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/test-preload-195073/id_rsa Username:docker}
	I1026 15:05:36.750986  163849 ssh_runner.go:195] Run: systemctl --version
	I1026 15:05:36.777913  163849 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 15:05:36.924028  163849 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 15:05:36.931552  163849 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 15:05:36.931626  163849 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 15:05:36.950754  163849 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 15:05:36.950785  163849 start.go:495] detecting cgroup driver to use...
	I1026 15:05:36.950861  163849 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 15:05:36.969853  163849 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 15:05:36.987062  163849 docker.go:218] disabling cri-docker service (if available) ...
	I1026 15:05:36.987173  163849 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 15:05:37.004377  163849 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 15:05:37.021072  163849 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 15:05:37.164234  163849 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 15:05:37.381576  163849 docker.go:234] disabling docker service ...
	I1026 15:05:37.381650  163849 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 15:05:37.398061  163849 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 15:05:37.412275  163849 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 15:05:37.562373  163849 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 15:05:37.709052  163849 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 15:05:37.725274  163849 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 15:05:37.749617  163849 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1026 15:05:37.749708  163849 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:05:37.763422  163849 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 15:05:37.763503  163849 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:05:37.777411  163849 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:05:37.791602  163849 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:05:37.804988  163849 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 15:05:37.819253  163849 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:05:37.832776  163849 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:05:37.853401  163849 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:05:37.866441  163849 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 15:05:37.877010  163849 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1026 15:05:37.877081  163849 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1026 15:05:37.896823  163849 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
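
Note: the three steps above probe the bridge-nf sysctl, load br_netfilter when the sysctl file is missing, and enable IPv4 forwarding. A small illustrative Go sketch of the same sequence (it must run as root to write the /proc entries):

    // Sketch of the netfilter/ip_forward preparation seen above. Requires root.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // Mirrors `sysctl net.bridge.bridge-nf-call-iptables`: the file only
        // exists once the br_netfilter module is loaded.
        if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
            // Mirrors `sudo modprobe br_netfilter`.
            if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
                fmt.Printf("modprobe br_netfilter failed: %v (%s)\n", err, strings.TrimSpace(string(out)))
            }
        }
        // Mirrors `echo 1 > /proc/sys/net/ipv4/ip_forward`.
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
            fmt.Println("enabling ip_forward:", err)
        }
    }
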
	I1026 15:05:37.908475  163849 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:05:38.043810  163849 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 15:05:38.148596  163849 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 15:05:38.148680  163849 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 15:05:38.153840  163849 start.go:563] Will wait 60s for crictl version
	I1026 15:05:38.153916  163849 ssh_runner.go:195] Run: which crictl
	I1026 15:05:38.157815  163849 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1026 15:05:38.196474  163849 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
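
Note: the two 60-second waits above (first for the CRI socket to appear, then for crictl to answer) amount to a stat-and-retry loop. A hedged sketch of that loop, with an assumed 500ms retry interval:

    // Poll for a socket path until it exists or the deadline passes.
    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Println(err)
            os.Exit(1)
        }
        fmt.Println("crio socket is ready")
    }
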
	I1026 15:05:38.196557  163849 ssh_runner.go:195] Run: crio --version
	I1026 15:05:38.225217  163849 ssh_runner.go:195] Run: crio --version
	I1026 15:05:38.254627  163849 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1026 15:05:38.258220  163849 main.go:141] libmachine: domain test-preload-195073 has defined MAC address 52:54:00:6a:cc:9f in network mk-test-preload-195073
	I1026 15:05:38.258645  163849 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6a:cc:9f", ip: ""} in network mk-test-preload-195073: {Iface:virbr1 ExpiryTime:2025-10-26 16:05:32 +0000 UTC Type:0 Mac:52:54:00:6a:cc:9f Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:test-preload-195073 Clientid:01:52:54:00:6a:cc:9f}
	I1026 15:05:38.258676  163849 main.go:141] libmachine: domain test-preload-195073 has defined IP address 192.168.39.157 and MAC address 52:54:00:6a:cc:9f in network mk-test-preload-195073
	I1026 15:05:38.258843  163849 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1026 15:05:38.262986  163849 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
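
Note: the bash one-liner above rewrites /etc/hosts so that exactly one `host.minikube.internal` entry points at the gateway IP. The Go sketch below reproduces the same filter-then-append logic in memory; it prints the result instead of copying it back with sudo as the real command does.

    // Rebuild an /etc/hosts body with a single "<ip>\t<name>" entry.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func ensureHostsEntry(hosts, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // drop any stale mapping, like `grep -v` above
            }
            kept = append(kept, line)
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        fmt.Print(ensureHostsEntry(string(data), "192.168.39.1", "host.minikube.internal"))
    }
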
	I1026 15:05:38.277343  163849 kubeadm.go:883] updating cluster {Name:test-preload-195073 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.32.0 ClusterName:test-preload-195073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.157 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 15:05:38.277490  163849 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1026 15:05:38.277540  163849 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:05:38.313724  163849 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1026 15:05:38.313797  163849 ssh_runner.go:195] Run: which lz4
	I1026 15:05:38.317896  163849 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1026 15:05:38.322543  163849 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1026 15:05:38.322582  163849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1026 15:05:39.724137  163849 crio.go:462] duration metric: took 1.406280004s to copy over tarball
	I1026 15:05:39.724247  163849 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1026 15:05:41.366345  163849 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.642064949s)
	I1026 15:05:41.366372  163849 crio.go:469] duration metric: took 1.642197463s to extract the tarball
	I1026 15:05:41.366380  163849 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1026 15:05:41.406139  163849 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:05:41.449952  163849 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:05:41.449984  163849 cache_images.go:85] Images are preloaded, skipping loading
	I1026 15:05:41.449995  163849 kubeadm.go:934] updating node { 192.168.39.157 8443 v1.32.0 crio true true} ...
	I1026 15:05:41.450152  163849 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-195073 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.157
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-195073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 15:05:41.450234  163849 ssh_runner.go:195] Run: crio config
	I1026 15:05:41.498536  163849 cni.go:84] Creating CNI manager for ""
	I1026 15:05:41.498574  163849 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 15:05:41.498607  163849 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 15:05:41.498638  163849 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.157 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-195073 NodeName:test-preload-195073 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.157"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.157 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 15:05:41.498780  163849 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.157
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-195073"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.157"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.157"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 15:05:41.498857  163849 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1026 15:05:41.510833  163849 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 15:05:41.510919  163849 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 15:05:41.522238  163849 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1026 15:05:41.541307  163849 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 15:05:41.560041  163849 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I1026 15:05:41.579878  163849 ssh_runner.go:195] Run: grep 192.168.39.157	control-plane.minikube.internal$ /etc/hosts
	I1026 15:05:41.583929  163849 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.157	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:05:41.600026  163849 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:05:41.737853  163849 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:05:41.771769  163849 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/test-preload-195073 for IP: 192.168.39.157
	I1026 15:05:41.771803  163849 certs.go:195] generating shared ca certs ...
	I1026 15:05:41.771833  163849 certs.go:227] acquiring lock for ca certs: {Name:mk93131c71acd79b9ab313e88723331b0af2d4bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:05:41.772043  163849 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-137233/.minikube/ca.key
	I1026 15:05:41.772129  163849 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-137233/.minikube/proxy-client-ca.key
	I1026 15:05:41.772149  163849 certs.go:257] generating profile certs ...
	I1026 15:05:41.772280  163849 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/test-preload-195073/client.key
	I1026 15:05:41.772366  163849 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/test-preload-195073/apiserver.key.1a5e8a53
	I1026 15:05:41.772418  163849 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/test-preload-195073/proxy-client.key
	I1026 15:05:41.772617  163849 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/141233.pem (1338 bytes)
	W1026 15:05:41.772668  163849 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-137233/.minikube/certs/141233_empty.pem, impossibly tiny 0 bytes
	I1026 15:05:41.772683  163849 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca-key.pem (1679 bytes)
	I1026 15:05:41.772734  163849 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca.pem (1082 bytes)
	I1026 15:05:41.772776  163849 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/cert.pem (1123 bytes)
	I1026 15:05:41.772810  163849 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/key.pem (1675 bytes)
	I1026 15:05:41.772868  163849 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-137233/.minikube/files/etc/ssl/certs/1412332.pem (1708 bytes)
	I1026 15:05:41.773709  163849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 15:05:41.806324  163849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1026 15:05:41.839385  163849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 15:05:41.867011  163849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 15:05:41.894359  163849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/test-preload-195073/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1026 15:05:41.922543  163849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/test-preload-195073/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 15:05:41.949122  163849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/test-preload-195073/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 15:05:41.975626  163849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/test-preload-195073/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 15:05:42.002053  163849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 15:05:42.028044  163849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/certs/141233.pem --> /usr/share/ca-certificates/141233.pem (1338 bytes)
	I1026 15:05:42.054090  163849 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/files/etc/ssl/certs/1412332.pem --> /usr/share/ca-certificates/1412332.pem (1708 bytes)
	I1026 15:05:42.079996  163849 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 15:05:42.098341  163849 ssh_runner.go:195] Run: openssl version
	I1026 15:05:42.103889  163849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/141233.pem && ln -fs /usr/share/ca-certificates/141233.pem /etc/ssl/certs/141233.pem"
	I1026 15:05:42.115096  163849 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/141233.pem
	I1026 15:05:42.119811  163849 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 14:24 /usr/share/ca-certificates/141233.pem
	I1026 15:05:42.119848  163849 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/141233.pem
	I1026 15:05:42.126339  163849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/141233.pem /etc/ssl/certs/51391683.0"
	I1026 15:05:42.138022  163849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1412332.pem && ln -fs /usr/share/ca-certificates/1412332.pem /etc/ssl/certs/1412332.pem"
	I1026 15:05:42.149524  163849 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1412332.pem
	I1026 15:05:42.154332  163849 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 14:24 /usr/share/ca-certificates/1412332.pem
	I1026 15:05:42.154381  163849 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1412332.pem
	I1026 15:05:42.160963  163849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1412332.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 15:05:42.172607  163849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 15:05:42.184063  163849 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:05:42.188740  163849 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 14:16 /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:05:42.188780  163849 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:05:42.195236  163849 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 15:05:42.207013  163849 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 15:05:42.211764  163849 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 15:05:42.218506  163849 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 15:05:42.225046  163849 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 15:05:42.231622  163849 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 15:05:42.238067  163849 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 15:05:42.244479  163849 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
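
Note: each `openssl x509 ... -checkend 86400` call above asks whether the certificate expires within the next 24 hours, which is how the existing certificates are judged reusable. An equivalent check in Go, using one of the certificate paths from the log as an example:

    // Report whether a PEM certificate expires within the given window,
    // mirroring `openssl x509 -noout -checkend 86400`.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Println(err)
            os.Exit(1)
        }
        fmt.Println("expires within 24h:", soon)
    }
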
	I1026 15:05:42.250945  163849 kubeadm.go:400] StartCluster: {Name:test-preload-195073 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
32.0 ClusterName:test-preload-195073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.157 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:05:42.251019  163849 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 15:05:42.251084  163849 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 15:05:42.287751  163849 cri.go:89] found id: ""
	I1026 15:05:42.287826  163849 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 15:05:42.299499  163849 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1026 15:05:42.299517  163849 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1026 15:05:42.299559  163849 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1026 15:05:42.309950  163849 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1026 15:05:42.310370  163849 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-195073" does not appear in /home/jenkins/minikube-integration/21664-137233/kubeconfig
	I1026 15:05:42.310519  163849 kubeconfig.go:62] /home/jenkins/minikube-integration/21664-137233/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-195073" cluster setting kubeconfig missing "test-preload-195073" context setting]
	I1026 15:05:42.310775  163849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-137233/kubeconfig: {Name:mka07626640e842c6c2177ad5f101c4a2dd91d4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:05:42.311277  163849 kapi.go:59] client config for test-preload-195073: &rest.Config{Host:"https://192.168.39.157:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21664-137233/.minikube/profiles/test-preload-195073/client.crt", KeyFile:"/home/jenkins/minikube-integration/21664-137233/.minikube/profiles/test-preload-195073/client.key", CAFile:"/home/jenkins/minikube-integration/21664-137233/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c6a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1026 15:05:42.311663  163849 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1026 15:05:42.311676  163849 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1026 15:05:42.311680  163849 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1026 15:05:42.311684  163849 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1026 15:05:42.311687  163849 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1026 15:05:42.312017  163849 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1026 15:05:42.322003  163849 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.39.157
	I1026 15:05:42.322036  163849 kubeadm.go:1160] stopping kube-system containers ...
	I1026 15:05:42.322051  163849 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1026 15:05:42.322104  163849 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 15:05:42.358386  163849 cri.go:89] found id: ""
	I1026 15:05:42.358446  163849 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1026 15:05:42.375121  163849 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 15:05:42.385434  163849 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 15:05:42.385450  163849 kubeadm.go:157] found existing configuration files:
	
	I1026 15:05:42.385501  163849 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 15:05:42.395078  163849 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 15:05:42.395116  163849 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 15:05:42.405362  163849 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 15:05:42.414752  163849 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 15:05:42.414793  163849 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 15:05:42.424982  163849 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 15:05:42.434994  163849 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 15:05:42.435031  163849 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 15:05:42.445334  163849 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 15:05:42.455185  163849 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 15:05:42.455237  163849 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 15:05:42.465560  163849 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 15:05:42.476192  163849 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1026 15:05:42.524654  163849 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1026 15:05:43.262176  163849 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1026 15:05:43.515760  163849 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1026 15:05:43.574070  163849 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1026 15:05:43.655084  163849 api_server.go:52] waiting for apiserver process to appear ...
	I1026 15:05:43.655168  163849 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:05:44.156040  163849 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:05:44.655552  163849 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:05:45.156032  163849 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:05:45.655611  163849 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:05:46.155328  163849 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:05:46.186422  163849 api_server.go:72] duration metric: took 2.53133269s to wait for apiserver process to appear ...
	I1026 15:05:46.186474  163849 api_server.go:88] waiting for apiserver healthz status ...
	I1026 15:05:46.186501  163849 api_server.go:253] Checking apiserver healthz at https://192.168.39.157:8443/healthz ...
	I1026 15:05:48.717982  163849 api_server.go:279] https://192.168.39.157:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1026 15:05:48.718011  163849 api_server.go:103] status: https://192.168.39.157:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1026 15:05:48.718028  163849 api_server.go:253] Checking apiserver healthz at https://192.168.39.157:8443/healthz ...
	I1026 15:05:48.760825  163849 api_server.go:279] https://192.168.39.157:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1026 15:05:48.760853  163849 api_server.go:103] status: https://192.168.39.157:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1026 15:05:49.187533  163849 api_server.go:253] Checking apiserver healthz at https://192.168.39.157:8443/healthz ...
	I1026 15:05:49.192219  163849 api_server.go:279] https://192.168.39.157:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 15:05:49.192242  163849 api_server.go:103] status: https://192.168.39.157:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 15:05:49.686853  163849 api_server.go:253] Checking apiserver healthz at https://192.168.39.157:8443/healthz ...
	I1026 15:05:49.695018  163849 api_server.go:279] https://192.168.39.157:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 15:05:49.695047  163849 api_server.go:103] status: https://192.168.39.157:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 15:05:50.186710  163849 api_server.go:253] Checking apiserver healthz at https://192.168.39.157:8443/healthz ...
	I1026 15:05:50.192628  163849 api_server.go:279] https://192.168.39.157:8443/healthz returned 200:
	ok
	I1026 15:05:50.200028  163849 api_server.go:141] control plane version: v1.32.0
	I1026 15:05:50.200056  163849 api_server.go:131] duration metric: took 4.013574483s to wait for apiserver health ...
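
Note: the healthz wait above tolerates the early 403 (anonymous access) and 500 (rbac/bootstrap-roles still failing) responses and only stops once /healthz returns 200. A self-contained sketch of that polling loop; certificate verification is skipped here purely to keep the example short, whereas minikube authenticates with the cluster's client certificates.

    // Poll an apiserver /healthz endpoint until it returns HTTP 200.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
        if err := waitForHealthz("https://192.168.39.157:8443/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
        } else {
            fmt.Println("ok")
        }
    }
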
	I1026 15:05:50.200067  163849 cni.go:84] Creating CNI manager for ""
	I1026 15:05:50.200074  163849 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 15:05:50.201790  163849 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1026 15:05:50.203240  163849 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1026 15:05:50.226534  163849 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1026 15:05:50.253966  163849 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 15:05:50.260255  163849 system_pods.go:59] 7 kube-system pods found
	I1026 15:05:50.260292  163849 system_pods.go:61] "coredns-668d6bf9bc-xs52d" [8e18ac3a-4a5a-4595-83bd-cafd92d158dc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:05:50.260299  163849 system_pods.go:61] "etcd-test-preload-195073" [86defca2-5d50-4a8b-9a0e-c7895f0bdd08] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:05:50.260310  163849 system_pods.go:61] "kube-apiserver-test-preload-195073" [eb687fd5-2a62-48d8-8a14-ccf114c0719b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:05:50.260317  163849 system_pods.go:61] "kube-controller-manager-test-preload-195073" [c1b79768-6642-4d7a-a93b-b90b7708a9b4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:05:50.260322  163849 system_pods.go:61] "kube-proxy-2xj78" [5d1f5c28-cffa-4855-900e-30c7fc43581d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1026 15:05:50.260333  163849 system_pods.go:61] "kube-scheduler-test-preload-195073" [d00b9419-7cdb-4ec0-b4d3-9ca5606d8f24] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:05:50.260342  163849 system_pods.go:61] "storage-provisioner" [7abc0516-e913-4e8a-8b44-62afa4b30c27] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:05:50.260350  163849 system_pods.go:74] duration metric: took 6.359049ms to wait for pod list to return data ...
	I1026 15:05:50.260360  163849 node_conditions.go:102] verifying NodePressure condition ...
	I1026 15:05:50.264171  163849 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 15:05:50.264196  163849 node_conditions.go:123] node cpu capacity is 2
	I1026 15:05:50.264209  163849 node_conditions.go:105] duration metric: took 3.844359ms to run NodePressure ...
	I1026 15:05:50.264257  163849 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1026 15:05:50.534981  163849 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1026 15:05:50.538428  163849 kubeadm.go:743] kubelet initialised
	I1026 15:05:50.538451  163849 kubeadm.go:744] duration metric: took 3.447165ms waiting for restarted kubelet to initialise ...
	I1026 15:05:50.538482  163849 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 15:05:50.562718  163849 ops.go:34] apiserver oom_adj: -16
	I1026 15:05:50.562742  163849 kubeadm.go:601] duration metric: took 8.263218392s to restartPrimaryControlPlane
	I1026 15:05:50.562752  163849 kubeadm.go:402] duration metric: took 8.311813846s to StartCluster
	I1026 15:05:50.562771  163849 settings.go:142] acquiring lock: {Name:mk260d179873b5d5f15b4780b692965367036bbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:05:50.562842  163849 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-137233/kubeconfig
	I1026 15:05:50.563510  163849 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-137233/kubeconfig: {Name:mka07626640e842c6c2177ad5f101c4a2dd91d4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:05:50.563738  163849 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.157 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:05:50.563824  163849 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 15:05:50.563957  163849 addons.go:69] Setting storage-provisioner=true in profile "test-preload-195073"
	I1026 15:05:50.563981  163849 addons.go:238] Setting addon storage-provisioner=true in "test-preload-195073"
	W1026 15:05:50.563998  163849 addons.go:247] addon storage-provisioner should already be in state true
	I1026 15:05:50.563978  163849 addons.go:69] Setting default-storageclass=true in profile "test-preload-195073"
	I1026 15:05:50.564022  163849 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-195073"
	I1026 15:05:50.564033  163849 host.go:66] Checking if "test-preload-195073" exists ...
	I1026 15:05:50.563999  163849 config.go:182] Loaded profile config "test-preload-195073": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1026 15:05:50.566598  163849 kapi.go:59] client config for test-preload-195073: &rest.Config{Host:"https://192.168.39.157:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21664-137233/.minikube/profiles/test-preload-195073/client.crt", KeyFile:"/home/jenkins/minikube-integration/21664-137233/.minikube/profiles/test-preload-195073/client.key", CAFile:"/home/jenkins/minikube-integration/21664-137233/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c6a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1026 15:05:50.566858  163849 addons.go:238] Setting addon default-storageclass=true in "test-preload-195073"
	W1026 15:05:50.566870  163849 addons.go:247] addon default-storageclass should already be in state true
	I1026 15:05:50.566890  163849 host.go:66] Checking if "test-preload-195073" exists ...
	I1026 15:05:50.567700  163849 out.go:179] * Verifying Kubernetes components...
	I1026 15:05:50.568214  163849 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 15:05:50.568231  163849 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 15:05:50.570441  163849 main.go:141] libmachine: domain test-preload-195073 has defined MAC address 52:54:00:6a:cc:9f in network mk-test-preload-195073
	I1026 15:05:50.570798  163849 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6a:cc:9f", ip: ""} in network mk-test-preload-195073: {Iface:virbr1 ExpiryTime:2025-10-26 16:05:32 +0000 UTC Type:0 Mac:52:54:00:6a:cc:9f Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:test-preload-195073 Clientid:01:52:54:00:6a:cc:9f}
	I1026 15:05:50.570838  163849 main.go:141] libmachine: domain test-preload-195073 has defined IP address 192.168.39.157 and MAC address 52:54:00:6a:cc:9f in network mk-test-preload-195073
	I1026 15:05:50.570958  163849 sshutil.go:53] new ssh client: &{IP:192.168.39.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/test-preload-195073/id_rsa Username:docker}
	I1026 15:05:50.573602  163849 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:05:50.574515  163849 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:05:50.575317  163849 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:05:50.575332  163849 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 15:05:50.577374  163849 main.go:141] libmachine: domain test-preload-195073 has defined MAC address 52:54:00:6a:cc:9f in network mk-test-preload-195073
	I1026 15:05:50.577732  163849 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:6a:cc:9f", ip: ""} in network mk-test-preload-195073: {Iface:virbr1 ExpiryTime:2025-10-26 16:05:32 +0000 UTC Type:0 Mac:52:54:00:6a:cc:9f Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:test-preload-195073 Clientid:01:52:54:00:6a:cc:9f}
	I1026 15:05:50.577761  163849 main.go:141] libmachine: domain test-preload-195073 has defined IP address 192.168.39.157 and MAC address 52:54:00:6a:cc:9f in network mk-test-preload-195073
	I1026 15:05:50.577934  163849 sshutil.go:53] new ssh client: &{IP:192.168.39.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/test-preload-195073/id_rsa Username:docker}
	I1026 15:05:50.825268  163849 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:05:50.860681  163849 node_ready.go:35] waiting up to 6m0s for node "test-preload-195073" to be "Ready" ...
	I1026 15:05:50.866107  163849 node_ready.go:49] node "test-preload-195073" is "Ready"
	I1026 15:05:50.866160  163849 node_ready.go:38] duration metric: took 5.395872ms for node "test-preload-195073" to be "Ready" ...
	I1026 15:05:50.866183  163849 api_server.go:52] waiting for apiserver process to appear ...
	I1026 15:05:50.866271  163849 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:05:50.889475  163849 api_server.go:72] duration metric: took 325.685396ms to wait for apiserver process to appear ...
	I1026 15:05:50.889514  163849 api_server.go:88] waiting for apiserver healthz status ...
	I1026 15:05:50.889541  163849 api_server.go:253] Checking apiserver healthz at https://192.168.39.157:8443/healthz ...
	I1026 15:05:50.902168  163849 api_server.go:279] https://192.168.39.157:8443/healthz returned 200:
	ok
	I1026 15:05:50.903673  163849 api_server.go:141] control plane version: v1.32.0
	I1026 15:05:50.903701  163849 api_server.go:131] duration metric: took 14.177668ms to wait for apiserver health ...
	I1026 15:05:50.903713  163849 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 15:05:50.907733  163849 system_pods.go:59] 7 kube-system pods found
	I1026 15:05:50.907769  163849 system_pods.go:61] "coredns-668d6bf9bc-xs52d" [8e18ac3a-4a5a-4595-83bd-cafd92d158dc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:05:50.907777  163849 system_pods.go:61] "etcd-test-preload-195073" [86defca2-5d50-4a8b-9a0e-c7895f0bdd08] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:05:50.907788  163849 system_pods.go:61] "kube-apiserver-test-preload-195073" [eb687fd5-2a62-48d8-8a14-ccf114c0719b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:05:50.907802  163849 system_pods.go:61] "kube-controller-manager-test-preload-195073" [c1b79768-6642-4d7a-a93b-b90b7708a9b4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:05:50.907809  163849 system_pods.go:61] "kube-proxy-2xj78" [5d1f5c28-cffa-4855-900e-30c7fc43581d] Running
	I1026 15:05:50.907821  163849 system_pods.go:61] "kube-scheduler-test-preload-195073" [d00b9419-7cdb-4ec0-b4d3-9ca5606d8f24] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:05:50.907827  163849 system_pods.go:61] "storage-provisioner" [7abc0516-e913-4e8a-8b44-62afa4b30c27] Running
	I1026 15:05:50.907836  163849 system_pods.go:74] duration metric: took 4.115122ms to wait for pod list to return data ...
	I1026 15:05:50.907849  163849 default_sa.go:34] waiting for default service account to be created ...
	I1026 15:05:50.910642  163849 default_sa.go:45] found service account: "default"
	I1026 15:05:50.910667  163849 default_sa.go:55] duration metric: took 2.81075ms for default service account to be created ...
	I1026 15:05:50.910680  163849 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 15:05:50.914508  163849 system_pods.go:86] 7 kube-system pods found
	I1026 15:05:50.914542  163849 system_pods.go:89] "coredns-668d6bf9bc-xs52d" [8e18ac3a-4a5a-4595-83bd-cafd92d158dc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:05:50.914552  163849 system_pods.go:89] "etcd-test-preload-195073" [86defca2-5d50-4a8b-9a0e-c7895f0bdd08] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:05:50.914563  163849 system_pods.go:89] "kube-apiserver-test-preload-195073" [eb687fd5-2a62-48d8-8a14-ccf114c0719b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:05:50.914570  163849 system_pods.go:89] "kube-controller-manager-test-preload-195073" [c1b79768-6642-4d7a-a93b-b90b7708a9b4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:05:50.914576  163849 system_pods.go:89] "kube-proxy-2xj78" [5d1f5c28-cffa-4855-900e-30c7fc43581d] Running
	I1026 15:05:50.914585  163849 system_pods.go:89] "kube-scheduler-test-preload-195073" [d00b9419-7cdb-4ec0-b4d3-9ca5606d8f24] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:05:50.914594  163849 system_pods.go:89] "storage-provisioner" [7abc0516-e913-4e8a-8b44-62afa4b30c27] Running
	I1026 15:05:50.914605  163849 system_pods.go:126] duration metric: took 3.91647ms to wait for k8s-apps to be running ...
	I1026 15:05:50.914618  163849 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 15:05:50.914677  163849 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:05:50.936720  163849 system_svc.go:56] duration metric: took 22.090979ms WaitForService to wait for kubelet
	I1026 15:05:50.936755  163849 kubeadm.go:586] duration metric: took 372.987572ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:05:50.936778  163849 node_conditions.go:102] verifying NodePressure condition ...
	I1026 15:05:50.939778  163849 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 15:05:50.939802  163849 node_conditions.go:123] node cpu capacity is 2
	I1026 15:05:50.939816  163849 node_conditions.go:105] duration metric: took 3.032193ms to run NodePressure ...
	I1026 15:05:50.939831  163849 start.go:241] waiting for startup goroutines ...
	I1026 15:05:50.976208  163849 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:05:50.981997  163849 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 15:05:51.624328  163849 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1026 15:05:51.625327  163849 addons.go:514] duration metric: took 1.061506147s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1026 15:05:51.625379  163849 start.go:246] waiting for cluster config update ...
	I1026 15:05:51.625394  163849 start.go:255] writing updated cluster config ...
	I1026 15:05:51.625783  163849 ssh_runner.go:195] Run: rm -f paused
	I1026 15:05:51.631537  163849 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:05:51.632058  163849 kapi.go:59] client config for test-preload-195073: &rest.Config{Host:"https://192.168.39.157:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21664-137233/.minikube/profiles/test-preload-195073/client.crt", KeyFile:"/home/jenkins/minikube-integration/21664-137233/.minikube/profiles/test-preload-195073/client.key", CAFile:"/home/jenkins/minikube-integration/21664-137233/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c6a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1026 15:05:51.635752  163849 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-xs52d" in "kube-system" namespace to be "Ready" or be gone ...
	W1026 15:05:53.644899  163849 pod_ready.go:104] pod "coredns-668d6bf9bc-xs52d" is not "Ready", error: <nil>
	I1026 15:05:55.141736  163849 pod_ready.go:94] pod "coredns-668d6bf9bc-xs52d" is "Ready"
	I1026 15:05:55.141763  163849 pod_ready.go:86] duration metric: took 3.505989482s for pod "coredns-668d6bf9bc-xs52d" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:05:55.144572  163849 pod_ready.go:83] waiting for pod "etcd-test-preload-195073" in "kube-system" namespace to be "Ready" or be gone ...
	W1026 15:05:57.150202  163849 pod_ready.go:104] pod "etcd-test-preload-195073" is not "Ready", error: <nil>
	W1026 15:05:59.150893  163849 pod_ready.go:104] pod "etcd-test-preload-195073" is not "Ready", error: <nil>
	W1026 15:06:01.650504  163849 pod_ready.go:104] pod "etcd-test-preload-195073" is not "Ready", error: <nil>
	I1026 15:06:03.150708  163849 pod_ready.go:94] pod "etcd-test-preload-195073" is "Ready"
	I1026 15:06:03.150738  163849 pod_ready.go:86] duration metric: took 8.006142248s for pod "etcd-test-preload-195073" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:06:03.152620  163849 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-195073" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:06:03.156721  163849 pod_ready.go:94] pod "kube-apiserver-test-preload-195073" is "Ready"
	I1026 15:06:03.156745  163849 pod_ready.go:86] duration metric: took 4.099075ms for pod "kube-apiserver-test-preload-195073" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:06:03.158504  163849 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-195073" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:06:03.162079  163849 pod_ready.go:94] pod "kube-controller-manager-test-preload-195073" is "Ready"
	I1026 15:06:03.162096  163849 pod_ready.go:86] duration metric: took 3.576747ms for pod "kube-controller-manager-test-preload-195073" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:06:03.163900  163849 pod_ready.go:83] waiting for pod "kube-proxy-2xj78" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:06:03.348681  163849 pod_ready.go:94] pod "kube-proxy-2xj78" is "Ready"
	I1026 15:06:03.348711  163849 pod_ready.go:86] duration metric: took 184.786878ms for pod "kube-proxy-2xj78" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:06:03.548973  163849 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-195073" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:06:03.949151  163849 pod_ready.go:94] pod "kube-scheduler-test-preload-195073" is "Ready"
	I1026 15:06:03.949179  163849 pod_ready.go:86] duration metric: took 400.180528ms for pod "kube-scheduler-test-preload-195073" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:06:03.949191  163849 pod_ready.go:40] duration metric: took 12.317599576s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:06:03.993190  163849 start.go:624] kubectl: 1.34.1, cluster: 1.32.0 (minor skew: 2)
	I1026 15:06:03.995077  163849 out.go:203] 
	W1026 15:06:03.996282  163849 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.32.0.
	I1026 15:06:03.997508  163849 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I1026 15:06:03.998663  163849 out.go:179] * Done! kubectl is now configured to use "test-preload-195073" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 26 15:06:04 test-preload-195073 crio[833]: time="2025-10-26 15:06:04.769812686Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761491164769765603,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=62ddd678-2f5b-42b7-ad29-dd0539b1824c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 15:06:04 test-preload-195073 crio[833]: time="2025-10-26 15:06:04.770416735Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dc755f38-c05b-4e41-a68e-e01d0a1e6d1e name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:06:04 test-preload-195073 crio[833]: time="2025-10-26 15:06:04.770468040Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dc755f38-c05b-4e41-a68e-e01d0a1e6d1e name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:06:04 test-preload-195073 crio[833]: time="2025-10-26 15:06:04.770629432Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ee90c6f8b1db112a8c0e6815132aafa1c214028f9ad0468d434277bed95b2197,PodSandboxId:8771c111a152acef16ffe29877aa8e41bf0dd954b312b080aaba5c56c0d19231,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1761491153625538488,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-xs52d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e18ac3a-4a5a-4595-83bd-cafd92d158dc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a071d3cc0fc1e733be5d8af87dd2a29c33d07d82762701b2b1cdd50fafb7cda,PodSandboxId:56f40bb1ea0a4f08086b504754c562b6d34a4391f848bff501584d9d1d76c04b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1761491150059956744,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2xj78,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 5d1f5c28-cffa-4855-900e-30c7fc43581d,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29fd910cebcbabea8f25f996e5e04e05f08666e37c61663f8460653dba5e739f,PodSandboxId:546b25dd4f5557127232aad634bd3b9783af16f84cd1480da0ea6607b1409880,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761491150013564900,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a
bc0516-e913-4e8a-8b44-62afa4b30c27,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a756210472a1f6cf844f5a22d41b1051b32ef8097342b05b9d8697aedd45b154,PodSandboxId:c70c2ab63d2d50f8b02a9ad186b218063c5e132612fb01e6c6e9a6b90772ba7c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1761491145815670539,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-195073,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 51fd5afec44f106d3aaf6fdac3394070,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8268f66814be11074730c39a10b4bf594ab6bb81cc0713307e38ecaac332156e,PodSandboxId:b4da974ce0e62bac94fac418357049200d14f22d8ea844ff214bd83d48b43175,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1761491145780772640,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-195073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8ad9915059512fee42944f
f168a92fc,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9abe710c514dc9f58e8915c0b13decfe12f9b735306c10f466d0053dd8384fff,PodSandboxId:ccc3e1d0a3933629ff6e0d57c674a5e2b6d52800c6e1228c2cc8c95cce35d7af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1761491145772619403,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-195073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f95f0e02d35fbbe719e0757cb281bb0,}
,Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:670bf67f0aaf1afd4b87a2d76eef2752aebca0b7d56e01479cd633a7c2fe2936,PodSandboxId:99d6573d278e185f084ac155b155f622cb2da542b061ec28bea3f7b4f1a194d8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1761491145711770212,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-195073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d7f67a4d7711e1bcb3887f7daf0080,},Annotation
s:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dc755f38-c05b-4e41-a68e-e01d0a1e6d1e name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:06:04 test-preload-195073 crio[833]: time="2025-10-26 15:06:04.807736579Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8f23f54b-dbdb-4ec2-9073-307fc5b3f2ef name=/runtime.v1.RuntimeService/Version
	Oct 26 15:06:04 test-preload-195073 crio[833]: time="2025-10-26 15:06:04.807805217Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8f23f54b-dbdb-4ec2-9073-307fc5b3f2ef name=/runtime.v1.RuntimeService/Version
	Oct 26 15:06:04 test-preload-195073 crio[833]: time="2025-10-26 15:06:04.808761927Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2e7bed36-d3fa-4ff5-b829-43bfc275e2b2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 15:06:04 test-preload-195073 crio[833]: time="2025-10-26 15:06:04.809194627Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761491164809172035,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2e7bed36-d3fa-4ff5-b829-43bfc275e2b2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 15:06:04 test-preload-195073 crio[833]: time="2025-10-26 15:06:04.809704017Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9ec0268d-0dc8-4317-b03c-7573a6dd34e5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:06:04 test-preload-195073 crio[833]: time="2025-10-26 15:06:04.809753851Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9ec0268d-0dc8-4317-b03c-7573a6dd34e5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:06:04 test-preload-195073 crio[833]: time="2025-10-26 15:06:04.810596586Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ee90c6f8b1db112a8c0e6815132aafa1c214028f9ad0468d434277bed95b2197,PodSandboxId:8771c111a152acef16ffe29877aa8e41bf0dd954b312b080aaba5c56c0d19231,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1761491153625538488,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-xs52d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e18ac3a-4a5a-4595-83bd-cafd92d158dc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a071d3cc0fc1e733be5d8af87dd2a29c33d07d82762701b2b1cdd50fafb7cda,PodSandboxId:56f40bb1ea0a4f08086b504754c562b6d34a4391f848bff501584d9d1d76c04b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1761491150059956744,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2xj78,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 5d1f5c28-cffa-4855-900e-30c7fc43581d,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29fd910cebcbabea8f25f996e5e04e05f08666e37c61663f8460653dba5e739f,PodSandboxId:546b25dd4f5557127232aad634bd3b9783af16f84cd1480da0ea6607b1409880,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761491150013564900,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a
bc0516-e913-4e8a-8b44-62afa4b30c27,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a756210472a1f6cf844f5a22d41b1051b32ef8097342b05b9d8697aedd45b154,PodSandboxId:c70c2ab63d2d50f8b02a9ad186b218063c5e132612fb01e6c6e9a6b90772ba7c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1761491145815670539,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-195073,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 51fd5afec44f106d3aaf6fdac3394070,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8268f66814be11074730c39a10b4bf594ab6bb81cc0713307e38ecaac332156e,PodSandboxId:b4da974ce0e62bac94fac418357049200d14f22d8ea844ff214bd83d48b43175,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1761491145780772640,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-195073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8ad9915059512fee42944f
f168a92fc,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9abe710c514dc9f58e8915c0b13decfe12f9b735306c10f466d0053dd8384fff,PodSandboxId:ccc3e1d0a3933629ff6e0d57c674a5e2b6d52800c6e1228c2cc8c95cce35d7af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1761491145772619403,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-195073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f95f0e02d35fbbe719e0757cb281bb0,}
,Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:670bf67f0aaf1afd4b87a2d76eef2752aebca0b7d56e01479cd633a7c2fe2936,PodSandboxId:99d6573d278e185f084ac155b155f622cb2da542b061ec28bea3f7b4f1a194d8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1761491145711770212,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-195073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d7f67a4d7711e1bcb3887f7daf0080,},Annotation
s:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9ec0268d-0dc8-4317-b03c-7573a6dd34e5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:06:04 test-preload-195073 crio[833]: time="2025-10-26 15:06:04.848174091Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=df72ed22-3e5c-48c6-9288-1984965f9642 name=/runtime.v1.RuntimeService/Version
	Oct 26 15:06:04 test-preload-195073 crio[833]: time="2025-10-26 15:06:04.848368621Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=df72ed22-3e5c-48c6-9288-1984965f9642 name=/runtime.v1.RuntimeService/Version
	Oct 26 15:06:04 test-preload-195073 crio[833]: time="2025-10-26 15:06:04.849728718Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=30e13ac2-60fe-4021-a54d-b42ad79c9f52 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 15:06:04 test-preload-195073 crio[833]: time="2025-10-26 15:06:04.850444335Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761491164850383579,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=30e13ac2-60fe-4021-a54d-b42ad79c9f52 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 15:06:04 test-preload-195073 crio[833]: time="2025-10-26 15:06:04.851216625Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6e2c3364-beff-465b-8a0a-b99d0a810e36 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:06:04 test-preload-195073 crio[833]: time="2025-10-26 15:06:04.851325134Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6e2c3364-beff-465b-8a0a-b99d0a810e36 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:06:04 test-preload-195073 crio[833]: time="2025-10-26 15:06:04.851566161Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ee90c6f8b1db112a8c0e6815132aafa1c214028f9ad0468d434277bed95b2197,PodSandboxId:8771c111a152acef16ffe29877aa8e41bf0dd954b312b080aaba5c56c0d19231,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1761491153625538488,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-xs52d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e18ac3a-4a5a-4595-83bd-cafd92d158dc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a071d3cc0fc1e733be5d8af87dd2a29c33d07d82762701b2b1cdd50fafb7cda,PodSandboxId:56f40bb1ea0a4f08086b504754c562b6d34a4391f848bff501584d9d1d76c04b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1761491150059956744,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2xj78,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 5d1f5c28-cffa-4855-900e-30c7fc43581d,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29fd910cebcbabea8f25f996e5e04e05f08666e37c61663f8460653dba5e739f,PodSandboxId:546b25dd4f5557127232aad634bd3b9783af16f84cd1480da0ea6607b1409880,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761491150013564900,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a
bc0516-e913-4e8a-8b44-62afa4b30c27,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a756210472a1f6cf844f5a22d41b1051b32ef8097342b05b9d8697aedd45b154,PodSandboxId:c70c2ab63d2d50f8b02a9ad186b218063c5e132612fb01e6c6e9a6b90772ba7c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1761491145815670539,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-195073,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 51fd5afec44f106d3aaf6fdac3394070,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8268f66814be11074730c39a10b4bf594ab6bb81cc0713307e38ecaac332156e,PodSandboxId:b4da974ce0e62bac94fac418357049200d14f22d8ea844ff214bd83d48b43175,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1761491145780772640,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-195073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8ad9915059512fee42944f
f168a92fc,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9abe710c514dc9f58e8915c0b13decfe12f9b735306c10f466d0053dd8384fff,PodSandboxId:ccc3e1d0a3933629ff6e0d57c674a5e2b6d52800c6e1228c2cc8c95cce35d7af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1761491145772619403,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-195073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f95f0e02d35fbbe719e0757cb281bb0,}
,Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:670bf67f0aaf1afd4b87a2d76eef2752aebca0b7d56e01479cd633a7c2fe2936,PodSandboxId:99d6573d278e185f084ac155b155f622cb2da542b061ec28bea3f7b4f1a194d8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1761491145711770212,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-195073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d7f67a4d7711e1bcb3887f7daf0080,},Annotation
s:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6e2c3364-beff-465b-8a0a-b99d0a810e36 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:06:04 test-preload-195073 crio[833]: time="2025-10-26 15:06:04.886861280Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a70d2cb6-cedb-4bde-8a3b-ee79edae4a96 name=/runtime.v1.RuntimeService/Version
	Oct 26 15:06:04 test-preload-195073 crio[833]: time="2025-10-26 15:06:04.886972936Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a70d2cb6-cedb-4bde-8a3b-ee79edae4a96 name=/runtime.v1.RuntimeService/Version
	Oct 26 15:06:04 test-preload-195073 crio[833]: time="2025-10-26 15:06:04.888231997Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6c9a0017-ae40-4fa6-b1d9-3847f6552e0b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 15:06:04 test-preload-195073 crio[833]: time="2025-10-26 15:06:04.888981409Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761491164888902386,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6c9a0017-ae40-4fa6-b1d9-3847f6552e0b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 15:06:04 test-preload-195073 crio[833]: time="2025-10-26 15:06:04.889584311Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f9a5f506-77a0-42b2-9c76-359b13e3c644 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:06:04 test-preload-195073 crio[833]: time="2025-10-26 15:06:04.889649286Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f9a5f506-77a0-42b2-9c76-359b13e3c644 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:06:04 test-preload-195073 crio[833]: time="2025-10-26 15:06:04.889860917Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ee90c6f8b1db112a8c0e6815132aafa1c214028f9ad0468d434277bed95b2197,PodSandboxId:8771c111a152acef16ffe29877aa8e41bf0dd954b312b080aaba5c56c0d19231,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1761491153625538488,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-xs52d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e18ac3a-4a5a-4595-83bd-cafd92d158dc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a071d3cc0fc1e733be5d8af87dd2a29c33d07d82762701b2b1cdd50fafb7cda,PodSandboxId:56f40bb1ea0a4f08086b504754c562b6d34a4391f848bff501584d9d1d76c04b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1761491150059956744,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2xj78,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 5d1f5c28-cffa-4855-900e-30c7fc43581d,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29fd910cebcbabea8f25f996e5e04e05f08666e37c61663f8460653dba5e739f,PodSandboxId:546b25dd4f5557127232aad634bd3b9783af16f84cd1480da0ea6607b1409880,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761491150013564900,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a
bc0516-e913-4e8a-8b44-62afa4b30c27,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a756210472a1f6cf844f5a22d41b1051b32ef8097342b05b9d8697aedd45b154,PodSandboxId:c70c2ab63d2d50f8b02a9ad186b218063c5e132612fb01e6c6e9a6b90772ba7c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1761491145815670539,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-195073,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 51fd5afec44f106d3aaf6fdac3394070,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8268f66814be11074730c39a10b4bf594ab6bb81cc0713307e38ecaac332156e,PodSandboxId:b4da974ce0e62bac94fac418357049200d14f22d8ea844ff214bd83d48b43175,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1761491145780772640,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-195073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8ad9915059512fee42944f
f168a92fc,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9abe710c514dc9f58e8915c0b13decfe12f9b735306c10f466d0053dd8384fff,PodSandboxId:ccc3e1d0a3933629ff6e0d57c674a5e2b6d52800c6e1228c2cc8c95cce35d7af,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1761491145772619403,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-195073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f95f0e02d35fbbe719e0757cb281bb0,}
,Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:670bf67f0aaf1afd4b87a2d76eef2752aebca0b7d56e01479cd633a7c2fe2936,PodSandboxId:99d6573d278e185f084ac155b155f622cb2da542b061ec28bea3f7b4f1a194d8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1761491145711770212,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-195073,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 04d7f67a4d7711e1bcb3887f7daf0080,},Annotation
s:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f9a5f506-77a0-42b2-9c76-359b13e3c644 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ee90c6f8b1db1       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   11 seconds ago      Running             coredns                   1                   8771c111a152a       coredns-668d6bf9bc-xs52d
	5a071d3cc0fc1       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   14 seconds ago      Running             kube-proxy                1                   56f40bb1ea0a4       kube-proxy-2xj78
	29fd910cebcba       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Running             storage-provisioner       1                   546b25dd4f555       storage-provisioner
	a756210472a1f       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   19 seconds ago      Running             kube-controller-manager   1                   c70c2ab63d2d5       kube-controller-manager-test-preload-195073
	8268f66814be1       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   19 seconds ago      Running             etcd                      1                   b4da974ce0e62       etcd-test-preload-195073
	9abe710c514dc       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   19 seconds ago      Running             kube-apiserver            1                   ccc3e1d0a3933       kube-apiserver-test-preload-195073
	670bf67f0aaf1       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   19 seconds ago      Running             kube-scheduler            1                   99d6573d278e1       kube-scheduler-test-preload-195073
	
	
	==> coredns [ee90c6f8b1db112a8c0e6815132aafa1c214028f9ad0468d434277bed95b2197] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:50866 - 40666 "HINFO IN 3310450744244616147.5927765773471255486. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.02383527s
	
	
	==> describe nodes <==
	Name:               test-preload-195073
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-195073
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46
	                    minikube.k8s.io/name=test-preload-195073
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T15_04_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 15:04:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-195073
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 15:05:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 15:05:50 +0000   Sun, 26 Oct 2025 15:04:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 15:05:50 +0000   Sun, 26 Oct 2025 15:04:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 15:05:50 +0000   Sun, 26 Oct 2025 15:04:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 15:05:50 +0000   Sun, 26 Oct 2025 15:05:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.157
	  Hostname:    test-preload-195073
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 d438be911d6942bfb8457d06daa82dbc
	  System UUID:                d438be91-1d69-42bf-b845-7d06daa82dbc
	  Boot ID:                    cc85fa34-a3a4-4bd5-8a95-f2b7b46a1420
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-xs52d                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     76s
	  kube-system                 etcd-test-preload-195073                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         82s
	  kube-system                 kube-apiserver-test-preload-195073             250m (12%)    0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-controller-manager-test-preload-195073    200m (10%)    0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-proxy-2xj78                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-scheduler-test-preload-195073             100m (5%)     0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 75s                kube-proxy       
	  Normal   Starting                 14s                kube-proxy       
	  Normal   NodeHasSufficientMemory  81s                kubelet          Node test-preload-195073 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  81s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    81s                kubelet          Node test-preload-195073 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     81s                kubelet          Node test-preload-195073 status is now: NodeHasSufficientPID
	  Normal   Starting                 81s                kubelet          Starting kubelet.
	  Normal   NodeReady                80s                kubelet          Node test-preload-195073 status is now: NodeReady
	  Normal   RegisteredNode           77s                node-controller  Node test-preload-195073 event: Registered Node test-preload-195073 in Controller
	  Normal   Starting                 22s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node test-preload-195073 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node test-preload-195073 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     22s (x7 over 22s)  kubelet          Node test-preload-195073 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 17s                kubelet          Node test-preload-195073 has been rebooted, boot id: cc85fa34-a3a4-4bd5-8a95-f2b7b46a1420
	  Normal   RegisteredNode           14s                node-controller  Node test-preload-195073 event: Registered Node test-preload-195073 in Controller
	
	
	==> dmesg <==
	[Oct26 15:05] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000046] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.005139] (rpcbind)[117]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.065643] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000022] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.081167] kauditd_printk_skb: 32 callbacks suppressed
	[  +0.098606] kauditd_printk_skb: 74 callbacks suppressed
	[  +6.485587] kauditd_printk_skb: 177 callbacks suppressed
	[Oct26 15:06] kauditd_printk_skb: 203 callbacks suppressed
	
	
	==> etcd [8268f66814be11074730c39a10b4bf594ab6bb81cc0713307e38ecaac332156e] <==
	{"level":"info","ts":"2025-10-26T15:05:46.212956Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"56d140a2e4073e49","local-member-id":"678d6d65e7bf3019","added-peer-id":"678d6d65e7bf3019","added-peer-peer-urls":["https://192.168.39.157:2380"]}
	{"level":"info","ts":"2025-10-26T15:05:46.213081Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"56d140a2e4073e49","local-member-id":"678d6d65e7bf3019","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-26T15:05:46.213104Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-26T15:05:46.216714Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-26T15:05:46.221851Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-26T15:05:46.222089Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.157:2380"}
	{"level":"info","ts":"2025-10-26T15:05:46.222119Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.157:2380"}
	{"level":"info","ts":"2025-10-26T15:05:46.224681Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"678d6d65e7bf3019","initial-advertise-peer-urls":["https://192.168.39.157:2380"],"listen-peer-urls":["https://192.168.39.157:2380"],"advertise-client-urls":["https://192.168.39.157:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.157:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-26T15:05:46.224771Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-26T15:05:47.672214Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"678d6d65e7bf3019 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-26T15:05:47.672250Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"678d6d65e7bf3019 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-26T15:05:47.672264Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"678d6d65e7bf3019 received MsgPreVoteResp from 678d6d65e7bf3019 at term 2"}
	{"level":"info","ts":"2025-10-26T15:05:47.672274Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"678d6d65e7bf3019 became candidate at term 3"}
	{"level":"info","ts":"2025-10-26T15:05:47.672297Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"678d6d65e7bf3019 received MsgVoteResp from 678d6d65e7bf3019 at term 3"}
	{"level":"info","ts":"2025-10-26T15:05:47.672306Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"678d6d65e7bf3019 became leader at term 3"}
	{"level":"info","ts":"2025-10-26T15:05:47.672312Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 678d6d65e7bf3019 elected leader 678d6d65e7bf3019 at term 3"}
	{"level":"info","ts":"2025-10-26T15:05:47.673845Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"678d6d65e7bf3019","local-member-attributes":"{Name:test-preload-195073 ClientURLs:[https://192.168.39.157:2379]}","request-path":"/0/members/678d6d65e7bf3019/attributes","cluster-id":"56d140a2e4073e49","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-26T15:05:47.673852Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-26T15:05:47.674132Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-26T15:05:47.674159Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-26T15:05:47.674228Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-26T15:05:47.674994Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-26T15:05:47.675702Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-10-26T15:05:47.676199Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-26T15:05:47.676579Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.157:2379"}
	
	
	==> kernel <==
	 15:06:05 up 0 min,  0 users,  load average: 1.27, 0.33, 0.11
	Linux test-preload-195073 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [9abe710c514dc9f58e8915c0b13decfe12f9b735306c10f466d0053dd8384fff] <==
	I1026 15:05:48.796907       1 autoregister_controller.go:144] Starting autoregister controller
	I1026 15:05:48.796917       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1026 15:05:48.796924       1 cache.go:39] Caches are synced for autoregister controller
	I1026 15:05:48.842890       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1026 15:05:48.852558       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1026 15:05:48.852571       1 policy_source.go:240] refreshing policies
	I1026 15:05:48.864939       1 shared_informer.go:320] Caches are synced for configmaps
	I1026 15:05:48.864998       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1026 15:05:48.865010       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1026 15:05:48.865459       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1026 15:05:48.867693       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1026 15:05:48.867846       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1026 15:05:48.868073       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1026 15:05:48.870373       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 15:05:48.875263       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1026 15:05:48.876720       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1026 15:05:49.685247       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1026 15:05:49.685459       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 15:05:50.345253       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1026 15:05:50.393511       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1026 15:05:50.424531       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 15:05:50.430932       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 15:05:52.024864       1 controller.go:615] quota admission added evaluator for: endpoints
	I1026 15:05:52.327912       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1026 15:05:52.376224       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [a756210472a1f6cf844f5a22d41b1051b32ef8097342b05b9d8697aedd45b154] <==
	I1026 15:05:51.973872       1 shared_informer.go:320] Caches are synced for GC
	I1026 15:05:51.975044       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1026 15:05:51.978473       1 shared_informer.go:320] Caches are synced for namespace
	I1026 15:05:51.979633       1 shared_informer.go:320] Caches are synced for resource quota
	I1026 15:05:51.979684       1 shared_informer.go:320] Caches are synced for node
	I1026 15:05:51.979710       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1026 15:05:51.979757       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1026 15:05:51.979763       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1026 15:05:51.979767       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1026 15:05:51.979813       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="test-preload-195073"
	I1026 15:05:51.981885       1 shared_informer.go:320] Caches are synced for garbage collector
	I1026 15:05:51.981898       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1026 15:05:51.981903       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1026 15:05:51.982741       1 shared_informer.go:320] Caches are synced for HPA
	I1026 15:05:51.987265       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I1026 15:05:51.995467       1 shared_informer.go:320] Caches are synced for garbage collector
	I1026 15:05:51.997598       1 shared_informer.go:320] Caches are synced for PV protection
	I1026 15:05:52.004955       1 shared_informer.go:320] Caches are synced for stateful set
	I1026 15:05:52.018140       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I1026 15:05:52.022539       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I1026 15:05:52.335477       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="362.919699ms"
	I1026 15:05:52.335990       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="409.789µs"
	I1026 15:05:54.715655       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="43.246µs"
	I1026 15:05:54.746722       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="15.586644ms"
	I1026 15:05:54.746883       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="67.749µs"
	
	
	==> kube-proxy [5a071d3cc0fc1e733be5d8af87dd2a29c33d07d82762701b2b1cdd50fafb7cda] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1026 15:05:50.283942       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1026 15:05:50.298187       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.157"]
	E1026 15:05:50.298275       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 15:05:50.362637       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1026 15:05:50.362679       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1026 15:05:50.362704       1 server_linux.go:170] "Using iptables Proxier"
	I1026 15:05:50.366593       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 15:05:50.367144       1 server.go:497] "Version info" version="v1.32.0"
	I1026 15:05:50.367170       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:05:50.370137       1 config.go:199] "Starting service config controller"
	I1026 15:05:50.370161       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1026 15:05:50.370186       1 config.go:105] "Starting endpoint slice config controller"
	I1026 15:05:50.370189       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1026 15:05:50.371194       1 config.go:329] "Starting node config controller"
	I1026 15:05:50.371218       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1026 15:05:50.471133       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1026 15:05:50.471165       1 shared_informer.go:320] Caches are synced for service config
	I1026 15:05:50.471563       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [670bf67f0aaf1afd4b87a2d76eef2752aebca0b7d56e01479cd633a7c2fe2936] <==
	I1026 15:05:46.403553       1 serving.go:386] Generated self-signed cert in-memory
	W1026 15:05:48.711347       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1026 15:05:48.712928       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1026 15:05:48.712978       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1026 15:05:48.712998       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1026 15:05:48.784345       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I1026 15:05:48.784497       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:05:48.789234       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:05:48.789308       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1026 15:05:48.791354       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1026 15:05:48.792279       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 15:05:48.889537       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 26 15:05:48 test-preload-195073 kubelet[1155]: E1026 15:05:48.914427    1155 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-test-preload-195073\" already exists" pod="kube-system/etcd-test-preload-195073"
	Oct 26 15:05:48 test-preload-195073 kubelet[1155]: I1026 15:05:48.914650    1155 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-test-preload-195073"
	Oct 26 15:05:48 test-preload-195073 kubelet[1155]: E1026 15:05:48.924717    1155 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-test-preload-195073\" already exists" pod="kube-system/kube-apiserver-test-preload-195073"
	Oct 26 15:05:48 test-preload-195073 kubelet[1155]: I1026 15:05:48.926481    1155 kubelet_node_status.go:125] "Node was previously registered" node="test-preload-195073"
	Oct 26 15:05:48 test-preload-195073 kubelet[1155]: I1026 15:05:48.926608    1155 kubelet_node_status.go:79] "Successfully registered node" node="test-preload-195073"
	Oct 26 15:05:48 test-preload-195073 kubelet[1155]: I1026 15:05:48.926666    1155 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 26 15:05:48 test-preload-195073 kubelet[1155]: I1026 15:05:48.927804    1155 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 26 15:05:48 test-preload-195073 kubelet[1155]: I1026 15:05:48.928684    1155 setters.go:602] "Node became not ready" node="test-preload-195073" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-26T15:05:48Z","lastTransitionTime":"2025-10-26T15:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"}
	Oct 26 15:05:49 test-preload-195073 kubelet[1155]: I1026 15:05:49.554802    1155 apiserver.go:52] "Watching apiserver"
	Oct 26 15:05:49 test-preload-195073 kubelet[1155]: E1026 15:05:49.568287    1155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-xs52d" podUID="8e18ac3a-4a5a-4595-83bd-cafd92d158dc"
	Oct 26 15:05:49 test-preload-195073 kubelet[1155]: I1026 15:05:49.582461    1155 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Oct 26 15:05:49 test-preload-195073 kubelet[1155]: I1026 15:05:49.668225    1155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5d1f5c28-cffa-4855-900e-30c7fc43581d-xtables-lock\") pod \"kube-proxy-2xj78\" (UID: \"5d1f5c28-cffa-4855-900e-30c7fc43581d\") " pod="kube-system/kube-proxy-2xj78"
	Oct 26 15:05:49 test-preload-195073 kubelet[1155]: I1026 15:05:49.668477    1155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d1f5c28-cffa-4855-900e-30c7fc43581d-lib-modules\") pod \"kube-proxy-2xj78\" (UID: \"5d1f5c28-cffa-4855-900e-30c7fc43581d\") " pod="kube-system/kube-proxy-2xj78"
	Oct 26 15:05:49 test-preload-195073 kubelet[1155]: I1026 15:05:49.668506    1155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7abc0516-e913-4e8a-8b44-62afa4b30c27-tmp\") pod \"storage-provisioner\" (UID: \"7abc0516-e913-4e8a-8b44-62afa4b30c27\") " pod="kube-system/storage-provisioner"
	Oct 26 15:05:49 test-preload-195073 kubelet[1155]: E1026 15:05:49.668866    1155 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 26 15:05:49 test-preload-195073 kubelet[1155]: E1026 15:05:49.668923    1155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8e18ac3a-4a5a-4595-83bd-cafd92d158dc-config-volume podName:8e18ac3a-4a5a-4595-83bd-cafd92d158dc nodeName:}" failed. No retries permitted until 2025-10-26 15:05:50.168906213 +0000 UTC m=+6.697812161 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8e18ac3a-4a5a-4595-83bd-cafd92d158dc-config-volume") pod "coredns-668d6bf9bc-xs52d" (UID: "8e18ac3a-4a5a-4595-83bd-cafd92d158dc") : object "kube-system"/"coredns" not registered
	Oct 26 15:05:50 test-preload-195073 kubelet[1155]: E1026 15:05:50.171645    1155 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 26 15:05:50 test-preload-195073 kubelet[1155]: E1026 15:05:50.171714    1155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8e18ac3a-4a5a-4595-83bd-cafd92d158dc-config-volume podName:8e18ac3a-4a5a-4595-83bd-cafd92d158dc nodeName:}" failed. No retries permitted until 2025-10-26 15:05:51.171699704 +0000 UTC m=+7.700605652 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8e18ac3a-4a5a-4595-83bd-cafd92d158dc-config-volume") pod "coredns-668d6bf9bc-xs52d" (UID: "8e18ac3a-4a5a-4595-83bd-cafd92d158dc") : object "kube-system"/"coredns" not registered
	Oct 26 15:05:50 test-preload-195073 kubelet[1155]: I1026 15:05:50.621477    1155 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
	Oct 26 15:05:51 test-preload-195073 kubelet[1155]: E1026 15:05:51.180481    1155 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 26 15:05:51 test-preload-195073 kubelet[1155]: E1026 15:05:51.180603    1155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8e18ac3a-4a5a-4595-83bd-cafd92d158dc-config-volume podName:8e18ac3a-4a5a-4595-83bd-cafd92d158dc nodeName:}" failed. No retries permitted until 2025-10-26 15:05:53.180579083 +0000 UTC m=+9.709485021 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8e18ac3a-4a5a-4595-83bd-cafd92d158dc-config-volume") pod "coredns-668d6bf9bc-xs52d" (UID: "8e18ac3a-4a5a-4595-83bd-cafd92d158dc") : object "kube-system"/"coredns" not registered
	Oct 26 15:05:53 test-preload-195073 kubelet[1155]: E1026 15:05:53.638640    1155 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761491153637584120,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 15:05:53 test-preload-195073 kubelet[1155]: E1026 15:05:53.638662    1155 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761491153637584120,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 15:06:03 test-preload-195073 kubelet[1155]: E1026 15:06:03.640503    1155 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761491163639658760,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 26 15:06:03 test-preload-195073 kubelet[1155]: E1026 15:06:03.640525    1155 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761491163639658760,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [29fd910cebcbabea8f25f996e5e04e05f08666e37c61663f8460653dba5e739f] <==
	I1026 15:05:50.128546       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-195073 -n test-preload-195073
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-195073 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-195073" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-195073
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-195073: (1.042235002s)
--- FAIL: TestPreload (131.01s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (379.99s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-750553 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-750553 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (6m16.420803401s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-750553] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21664-137233/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-137233/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-750553" primary control-plane node in "pause-750553" cluster
	* Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-750553" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 15:12:13.300846  170754 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:12:13.300983  170754 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:12:13.300989  170754 out.go:374] Setting ErrFile to fd 2...
	I1026 15:12:13.300996  170754 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:12:13.301939  170754 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-137233/.minikube/bin
	I1026 15:12:13.302674  170754 out.go:368] Setting JSON to false
	I1026 15:12:13.304190  170754 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6867,"bootTime":1761484666,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 15:12:13.304278  170754 start.go:141] virtualization: kvm guest
	I1026 15:12:13.306019  170754 out.go:179] * [pause-750553] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 15:12:13.307612  170754 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 15:12:13.307628  170754 notify.go:220] Checking for updates...
	I1026 15:12:13.311897  170754 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 15:12:13.312993  170754 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-137233/kubeconfig
	I1026 15:12:13.314060  170754 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-137233/.minikube
	I1026 15:12:13.315058  170754 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 15:12:13.315981  170754 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 15:12:13.317629  170754 config.go:182] Loaded profile config "pause-750553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:12:13.318234  170754 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 15:12:13.365658  170754 out.go:179] * Using the kvm2 driver based on existing profile
	I1026 15:12:13.366595  170754 start.go:305] selected driver: kvm2
	I1026 15:12:13.366610  170754 start.go:925] validating driver "kvm2" against &{Name:pause-750553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.34.1 ClusterName:pause-750553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.175 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-insta
ller:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:12:13.366738  170754 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 15:12:13.368062  170754 cni.go:84] Creating CNI manager for ""
	I1026 15:12:13.368126  170754 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 15:12:13.368190  170754 start.go:349] cluster config:
	{Name:pause-750553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-750553 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.175 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false
portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:12:13.368343  170754 iso.go:125] acquiring lock: {Name:mkfe78fcc13f0f0cc3fec30206c34a5da423b32d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:12:13.369664  170754 out.go:179] * Starting "pause-750553" primary control-plane node in "pause-750553" cluster
	I1026 15:12:13.370609  170754 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:12:13.370641  170754 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-137233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1026 15:12:13.370650  170754 cache.go:58] Caching tarball of preloaded images
	I1026 15:12:13.370747  170754 preload.go:233] Found /home/jenkins/minikube-integration/21664-137233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 15:12:13.370764  170754 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 15:12:13.370903  170754 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/pause-750553/config.json ...
	I1026 15:12:13.371176  170754 start.go:360] acquireMachinesLock for pause-750553: {Name:mka0e861669c2f6d38861d0614c7d3b8dd89392c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1026 15:12:13.371237  170754 start.go:364] duration metric: took 36.985µs to acquireMachinesLock for "pause-750553"
	I1026 15:12:13.371258  170754 start.go:96] Skipping create...Using existing machine configuration
	I1026 15:12:13.371264  170754 fix.go:54] fixHost starting: 
	I1026 15:12:13.373260  170754 fix.go:112] recreateIfNeeded on pause-750553: state=Running err=<nil>
	W1026 15:12:13.373285  170754 fix.go:138] unexpected machine state, will restart: <nil>
	I1026 15:12:13.374546  170754 out.go:252] * Updating the running kvm2 "pause-750553" VM ...
	I1026 15:12:13.374573  170754 machine.go:93] provisionDockerMachine start ...
	I1026 15:12:13.377189  170754 main.go:141] libmachine: domain pause-750553 has defined MAC address 52:54:00:42:a1:2e in network mk-pause-750553
	I1026 15:12:13.377828  170754 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:42:a1:2e", ip: ""} in network mk-pause-750553: {Iface:virbr4 ExpiryTime:2025-10-26 16:11:13 +0000 UTC Type:0 Mac:52:54:00:42:a1:2e Iaid: IPaddr:192.168.72.175 Prefix:24 Hostname:pause-750553 Clientid:01:52:54:00:42:a1:2e}
	I1026 15:12:13.377875  170754 main.go:141] libmachine: domain pause-750553 has defined IP address 192.168.72.175 and MAC address 52:54:00:42:a1:2e in network mk-pause-750553
	I1026 15:12:13.378145  170754 main.go:141] libmachine: Using SSH client type: native
	I1026 15:12:13.378415  170754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 192.168.72.175 22 <nil> <nil>}
	I1026 15:12:13.378434  170754 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 15:12:13.491699  170754 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-750553
	
	I1026 15:12:13.491734  170754 buildroot.go:166] provisioning hostname "pause-750553"
	I1026 15:12:13.495928  170754 main.go:141] libmachine: domain pause-750553 has defined MAC address 52:54:00:42:a1:2e in network mk-pause-750553
	I1026 15:12:13.496586  170754 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:42:a1:2e", ip: ""} in network mk-pause-750553: {Iface:virbr4 ExpiryTime:2025-10-26 16:11:13 +0000 UTC Type:0 Mac:52:54:00:42:a1:2e Iaid: IPaddr:192.168.72.175 Prefix:24 Hostname:pause-750553 Clientid:01:52:54:00:42:a1:2e}
	I1026 15:12:13.496627  170754 main.go:141] libmachine: domain pause-750553 has defined IP address 192.168.72.175 and MAC address 52:54:00:42:a1:2e in network mk-pause-750553
	I1026 15:12:13.496873  170754 main.go:141] libmachine: Using SSH client type: native
	I1026 15:12:13.497180  170754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 192.168.72.175 22 <nil> <nil>}
	I1026 15:12:13.497210  170754 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-750553 && echo "pause-750553" | sudo tee /etc/hostname
	I1026 15:12:13.637047  170754 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-750553
	
	I1026 15:12:13.640409  170754 main.go:141] libmachine: domain pause-750553 has defined MAC address 52:54:00:42:a1:2e in network mk-pause-750553
	I1026 15:12:13.640926  170754 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:42:a1:2e", ip: ""} in network mk-pause-750553: {Iface:virbr4 ExpiryTime:2025-10-26 16:11:13 +0000 UTC Type:0 Mac:52:54:00:42:a1:2e Iaid: IPaddr:192.168.72.175 Prefix:24 Hostname:pause-750553 Clientid:01:52:54:00:42:a1:2e}
	I1026 15:12:13.640954  170754 main.go:141] libmachine: domain pause-750553 has defined IP address 192.168.72.175 and MAC address 52:54:00:42:a1:2e in network mk-pause-750553
	I1026 15:12:13.641246  170754 main.go:141] libmachine: Using SSH client type: native
	I1026 15:12:13.641556  170754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 192.168.72.175 22 <nil> <nil>}
	I1026 15:12:13.641576  170754 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-750553' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-750553/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-750553' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 15:12:13.762831  170754 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 15:12:13.762858  170754 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21664-137233/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-137233/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-137233/.minikube}
	I1026 15:12:13.762980  170754 buildroot.go:174] setting up certificates
	I1026 15:12:13.762995  170754 provision.go:84] configureAuth start
	I1026 15:12:13.766077  170754 main.go:141] libmachine: domain pause-750553 has defined MAC address 52:54:00:42:a1:2e in network mk-pause-750553
	I1026 15:12:13.766584  170754 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:42:a1:2e", ip: ""} in network mk-pause-750553: {Iface:virbr4 ExpiryTime:2025-10-26 16:11:13 +0000 UTC Type:0 Mac:52:54:00:42:a1:2e Iaid: IPaddr:192.168.72.175 Prefix:24 Hostname:pause-750553 Clientid:01:52:54:00:42:a1:2e}
	I1026 15:12:13.766619  170754 main.go:141] libmachine: domain pause-750553 has defined IP address 192.168.72.175 and MAC address 52:54:00:42:a1:2e in network mk-pause-750553
	I1026 15:12:13.770086  170754 main.go:141] libmachine: domain pause-750553 has defined MAC address 52:54:00:42:a1:2e in network mk-pause-750553
	I1026 15:12:13.770572  170754 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:42:a1:2e", ip: ""} in network mk-pause-750553: {Iface:virbr4 ExpiryTime:2025-10-26 16:11:13 +0000 UTC Type:0 Mac:52:54:00:42:a1:2e Iaid: IPaddr:192.168.72.175 Prefix:24 Hostname:pause-750553 Clientid:01:52:54:00:42:a1:2e}
	I1026 15:12:13.770603  170754 main.go:141] libmachine: domain pause-750553 has defined IP address 192.168.72.175 and MAC address 52:54:00:42:a1:2e in network mk-pause-750553
	I1026 15:12:13.771318  170754 provision.go:143] copyHostCerts
	I1026 15:12:13.771404  170754 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-137233/.minikube/ca.pem, removing ...
	I1026 15:12:13.771432  170754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-137233/.minikube/ca.pem
	I1026 15:12:13.771528  170754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-137233/.minikube/ca.pem (1082 bytes)
	I1026 15:12:13.771654  170754 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-137233/.minikube/cert.pem, removing ...
	I1026 15:12:13.771666  170754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-137233/.minikube/cert.pem
	I1026 15:12:13.771692  170754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-137233/.minikube/cert.pem (1123 bytes)
	I1026 15:12:13.771756  170754 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-137233/.minikube/key.pem, removing ...
	I1026 15:12:13.771764  170754 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-137233/.minikube/key.pem
	I1026 15:12:13.771784  170754 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-137233/.minikube/key.pem (1675 bytes)
	I1026 15:12:13.771848  170754 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-137233/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca-key.pem org=jenkins.pause-750553 san=[127.0.0.1 192.168.72.175 localhost minikube pause-750553]
	I1026 15:12:13.940438  170754 provision.go:177] copyRemoteCerts
	I1026 15:12:13.940587  170754 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 15:12:13.944084  170754 main.go:141] libmachine: domain pause-750553 has defined MAC address 52:54:00:42:a1:2e in network mk-pause-750553
	I1026 15:12:13.944592  170754 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:42:a1:2e", ip: ""} in network mk-pause-750553: {Iface:virbr4 ExpiryTime:2025-10-26 16:11:13 +0000 UTC Type:0 Mac:52:54:00:42:a1:2e Iaid: IPaddr:192.168.72.175 Prefix:24 Hostname:pause-750553 Clientid:01:52:54:00:42:a1:2e}
	I1026 15:12:13.944629  170754 main.go:141] libmachine: domain pause-750553 has defined IP address 192.168.72.175 and MAC address 52:54:00:42:a1:2e in network mk-pause-750553
	I1026 15:12:13.944850  170754 sshutil.go:53] new ssh client: &{IP:192.168.72.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/pause-750553/id_rsa Username:docker}
	I1026 15:12:14.042147  170754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 15:12:14.090324  170754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1026 15:12:14.138690  170754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1026 15:12:14.183598  170754 provision.go:87] duration metric: took 420.583368ms to configureAuth
	I1026 15:12:14.183634  170754 buildroot.go:189] setting minikube options for container-runtime
	I1026 15:12:14.183928  170754 config.go:182] Loaded profile config "pause-750553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:12:14.188323  170754 main.go:141] libmachine: domain pause-750553 has defined MAC address 52:54:00:42:a1:2e in network mk-pause-750553
	I1026 15:12:14.188997  170754 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:42:a1:2e", ip: ""} in network mk-pause-750553: {Iface:virbr4 ExpiryTime:2025-10-26 16:11:13 +0000 UTC Type:0 Mac:52:54:00:42:a1:2e Iaid: IPaddr:192.168.72.175 Prefix:24 Hostname:pause-750553 Clientid:01:52:54:00:42:a1:2e}
	I1026 15:12:14.189095  170754 main.go:141] libmachine: domain pause-750553 has defined IP address 192.168.72.175 and MAC address 52:54:00:42:a1:2e in network mk-pause-750553
	I1026 15:12:14.189340  170754 main.go:141] libmachine: Using SSH client type: native
	I1026 15:12:14.189610  170754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 192.168.72.175 22 <nil> <nil>}
	I1026 15:12:14.189626  170754 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 15:12:21.497847  170754 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 15:12:21.497879  170754 machine.go:96] duration metric: took 8.12329818s to provisionDockerMachine
	I1026 15:12:21.497895  170754 start.go:293] postStartSetup for "pause-750553" (driver="kvm2")
	I1026 15:12:21.497912  170754 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 15:12:21.498007  170754 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 15:12:21.501656  170754 main.go:141] libmachine: domain pause-750553 has defined MAC address 52:54:00:42:a1:2e in network mk-pause-750553
	I1026 15:12:21.502217  170754 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:42:a1:2e", ip: ""} in network mk-pause-750553: {Iface:virbr4 ExpiryTime:2025-10-26 16:11:13 +0000 UTC Type:0 Mac:52:54:00:42:a1:2e Iaid: IPaddr:192.168.72.175 Prefix:24 Hostname:pause-750553 Clientid:01:52:54:00:42:a1:2e}
	I1026 15:12:21.502248  170754 main.go:141] libmachine: domain pause-750553 has defined IP address 192.168.72.175 and MAC address 52:54:00:42:a1:2e in network mk-pause-750553
	I1026 15:12:21.502430  170754 sshutil.go:53] new ssh client: &{IP:192.168.72.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/pause-750553/id_rsa Username:docker}
	I1026 15:12:21.587878  170754 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 15:12:21.594076  170754 info.go:137] Remote host: Buildroot 2025.02
	I1026 15:12:21.594105  170754 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-137233/.minikube/addons for local assets ...
	I1026 15:12:21.594182  170754 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-137233/.minikube/files for local assets ...
	I1026 15:12:21.594291  170754 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-137233/.minikube/files/etc/ssl/certs/1412332.pem -> 1412332.pem in /etc/ssl/certs
	I1026 15:12:21.594426  170754 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 15:12:21.611263  170754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/files/etc/ssl/certs/1412332.pem --> /etc/ssl/certs/1412332.pem (1708 bytes)
	I1026 15:12:21.642828  170754 start.go:296] duration metric: took 144.914284ms for postStartSetup
	I1026 15:12:21.642872  170754 fix.go:56] duration metric: took 8.271608928s for fixHost
	I1026 15:12:21.645832  170754 main.go:141] libmachine: domain pause-750553 has defined MAC address 52:54:00:42:a1:2e in network mk-pause-750553
	I1026 15:12:21.646223  170754 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:42:a1:2e", ip: ""} in network mk-pause-750553: {Iface:virbr4 ExpiryTime:2025-10-26 16:11:13 +0000 UTC Type:0 Mac:52:54:00:42:a1:2e Iaid: IPaddr:192.168.72.175 Prefix:24 Hostname:pause-750553 Clientid:01:52:54:00:42:a1:2e}
	I1026 15:12:21.646249  170754 main.go:141] libmachine: domain pause-750553 has defined IP address 192.168.72.175 and MAC address 52:54:00:42:a1:2e in network mk-pause-750553
	I1026 15:12:21.646438  170754 main.go:141] libmachine: Using SSH client type: native
	I1026 15:12:21.646675  170754 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 192.168.72.175 22 <nil> <nil>}
	I1026 15:12:21.646689  170754 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1026 15:12:21.753037  170754 main.go:141] libmachine: SSH cmd err, output: <nil>: 1761491541.748013643
	
	I1026 15:12:21.753059  170754 fix.go:216] guest clock: 1761491541.748013643
	I1026 15:12:21.753068  170754 fix.go:229] Guest: 2025-10-26 15:12:21.748013643 +0000 UTC Remote: 2025-10-26 15:12:21.642876534 +0000 UTC m=+8.407965287 (delta=105.137109ms)
	I1026 15:12:21.753091  170754 fix.go:200] guest clock delta is within tolerance: 105.137109ms
	I1026 15:12:21.753098  170754 start.go:83] releasing machines lock for "pause-750553", held for 8.381847696s
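The guest-clock check above boils down to comparing `date +%s.%N` on the guest against the host clock. A minimal by-hand version (profile name taken from the log; the awk arithmetic and the tr cleanup are assumptions of this sketch):

    guest=$(minikube -p pause-750553 ssh -- 'date +%s.%N' | tr -d '\r')
    host=$(date +%s.%N)
    awk -v h="$host" -v g="$guest" 'BEGIN { printf "guest/host clock delta: %.3fs\n", h - g }'
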
	I1026 15:12:21.756959  170754 main.go:141] libmachine: domain pause-750553 has defined MAC address 52:54:00:42:a1:2e in network mk-pause-750553
	I1026 15:12:21.757400  170754 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:42:a1:2e", ip: ""} in network mk-pause-750553: {Iface:virbr4 ExpiryTime:2025-10-26 16:11:13 +0000 UTC Type:0 Mac:52:54:00:42:a1:2e Iaid: IPaddr:192.168.72.175 Prefix:24 Hostname:pause-750553 Clientid:01:52:54:00:42:a1:2e}
	I1026 15:12:21.757428  170754 main.go:141] libmachine: domain pause-750553 has defined IP address 192.168.72.175 and MAC address 52:54:00:42:a1:2e in network mk-pause-750553
	I1026 15:12:21.758038  170754 ssh_runner.go:195] Run: cat /version.json
	I1026 15:12:21.758133  170754 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 15:12:21.761877  170754 main.go:141] libmachine: domain pause-750553 has defined MAC address 52:54:00:42:a1:2e in network mk-pause-750553
	I1026 15:12:21.762014  170754 main.go:141] libmachine: domain pause-750553 has defined MAC address 52:54:00:42:a1:2e in network mk-pause-750553
	I1026 15:12:21.762332  170754 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:42:a1:2e", ip: ""} in network mk-pause-750553: {Iface:virbr4 ExpiryTime:2025-10-26 16:11:13 +0000 UTC Type:0 Mac:52:54:00:42:a1:2e Iaid: IPaddr:192.168.72.175 Prefix:24 Hostname:pause-750553 Clientid:01:52:54:00:42:a1:2e}
	I1026 15:12:21.762364  170754 main.go:141] libmachine: domain pause-750553 has defined IP address 192.168.72.175 and MAC address 52:54:00:42:a1:2e in network mk-pause-750553
	I1026 15:12:21.762390  170754 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:42:a1:2e", ip: ""} in network mk-pause-750553: {Iface:virbr4 ExpiryTime:2025-10-26 16:11:13 +0000 UTC Type:0 Mac:52:54:00:42:a1:2e Iaid: IPaddr:192.168.72.175 Prefix:24 Hostname:pause-750553 Clientid:01:52:54:00:42:a1:2e}
	I1026 15:12:21.762418  170754 main.go:141] libmachine: domain pause-750553 has defined IP address 192.168.72.175 and MAC address 52:54:00:42:a1:2e in network mk-pause-750553
	I1026 15:12:21.762565  170754 sshutil.go:53] new ssh client: &{IP:192.168.72.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/pause-750553/id_rsa Username:docker}
	I1026 15:12:21.762760  170754 sshutil.go:53] new ssh client: &{IP:192.168.72.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/pause-750553/id_rsa Username:docker}
	I1026 15:12:21.942766  170754 ssh_runner.go:195] Run: systemctl --version
	I1026 15:12:21.958924  170754 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 15:12:22.183554  170754 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 15:12:22.199398  170754 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 15:12:22.199519  170754 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 15:12:22.247826  170754 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1026 15:12:22.247873  170754 start.go:495] detecting cgroup driver to use...
	I1026 15:12:22.247967  170754 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 15:12:22.300580  170754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 15:12:22.328019  170754 docker.go:218] disabling cri-docker service (if available) ...
	I1026 15:12:22.328098  170754 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 15:12:22.363782  170754 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 15:12:22.388601  170754 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 15:12:22.762823  170754 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 15:12:23.170061  170754 docker.go:234] disabling docker service ...
	I1026 15:12:23.170132  170754 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 15:12:23.227493  170754 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 15:12:23.251258  170754 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 15:12:23.565718  170754 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 15:12:23.798039  170754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 15:12:23.827450  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 15:12:23.854611  170754 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 15:12:23.854697  170754 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:12:23.867496  170754 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 15:12:23.867592  170754 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:12:23.881686  170754 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:12:23.896691  170754 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:12:23.916489  170754 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 15:12:23.935632  170754 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:12:23.950251  170754 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:12:23.966277  170754 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
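Taken together, the sed/grep pipeline above should leave roughly the following keys in /etc/crio/crio.conf.d/02-crio.conf; the surrounding TOML is whatever the image ships, so this is only a sketch of how to eyeball the result:

    sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # Expected, based on the commands above:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [
    #     "net.ipv4.ip_unprivileged_port_start=0",
    #   ]
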
	I1026 15:12:23.985845  170754 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 15:12:24.014187  170754 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 15:12:24.029002  170754 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:12:24.274863  170754 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 15:13:54.589162  170754 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.314255818s)
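A `systemctl restart crio` that takes a minute and a half, as it did here, is usually worth a look at the unit's journal. A minimal check from the host (profile name from the log; the same journalctl filter appears later in this report):

    minikube -p pause-750553 ssh -- 'sudo systemctl status crio --no-pager -l'
    minikube -p pause-750553 ssh -- 'sudo journalctl -u crio --no-pager -n 200'
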
	I1026 15:13:54.589194  170754 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 15:13:54.589286  170754 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 15:13:54.596442  170754 start.go:563] Will wait 60s for crictl version
	I1026 15:13:54.596519  170754 ssh_runner.go:195] Run: which crictl
	I1026 15:13:54.602041  170754 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1026 15:13:54.640079  170754 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1026 15:13:54.640159  170754 ssh_runner.go:195] Run: crio --version
	I1026 15:13:54.672051  170754 ssh_runner.go:195] Run: crio --version
	I1026 15:13:54.716574  170754 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1026 15:13:54.720936  170754 main.go:141] libmachine: domain pause-750553 has defined MAC address 52:54:00:42:a1:2e in network mk-pause-750553
	I1026 15:13:54.721354  170754 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:42:a1:2e", ip: ""} in network mk-pause-750553: {Iface:virbr4 ExpiryTime:2025-10-26 16:11:13 +0000 UTC Type:0 Mac:52:54:00:42:a1:2e Iaid: IPaddr:192.168.72.175 Prefix:24 Hostname:pause-750553 Clientid:01:52:54:00:42:a1:2e}
	I1026 15:13:54.721385  170754 main.go:141] libmachine: domain pause-750553 has defined IP address 192.168.72.175 and MAC address 52:54:00:42:a1:2e in network mk-pause-750553
	I1026 15:13:54.721696  170754 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1026 15:13:54.727181  170754 kubeadm.go:883] updating cluster {Name:pause-750553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1
ClusterName:pause-750553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.175 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvid
ia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 15:13:54.727391  170754 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:13:54.727480  170754 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:13:54.782648  170754 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:13:54.782683  170754 crio.go:433] Images already preloaded, skipping extraction
	I1026 15:13:54.782763  170754 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:13:54.822968  170754 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:13:54.822998  170754 cache_images.go:85] Images are preloaded, skipping loading
	I1026 15:13:54.823007  170754 kubeadm.go:934] updating node { 192.168.72.175 8443 v1.34.1 crio true true} ...
	I1026 15:13:54.823128  170754 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-750553 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.175
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-750553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 15:13:54.823203  170754 ssh_runner.go:195] Run: crio config
	I1026 15:13:54.892147  170754 cni.go:84] Creating CNI manager for ""
	I1026 15:13:54.892183  170754 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 15:13:54.892222  170754 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 15:13:54.892259  170754 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.175 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-750553 NodeName:pause-750553 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.175"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.175 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 15:13:54.892536  170754 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.175
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-750553"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.175"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.175"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
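One way to sanity-check the rendered kubeadm config above before it is used is `kubeadm config validate`. This is a sketch, assuming the file lands at /var/tmp/minikube/kubeadm.yaml.new as the scp line a few entries below indicates, and that the bundled kubeadm build supports the subcommand:

    minikube -p pause-750553 ssh -- \
        'sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new'
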
	
	I1026 15:13:54.892642  170754 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 15:13:54.909908  170754 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 15:13:54.910013  170754 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 15:13:54.925443  170754 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1026 15:13:54.954140  170754 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 15:13:54.977516  170754 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1026 15:13:55.000567  170754 ssh_runner.go:195] Run: grep 192.168.72.175	control-plane.minikube.internal$ /etc/hosts
	I1026 15:13:55.004992  170754 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:13:55.188928  170754 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:13:55.209508  170754 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/pause-750553 for IP: 192.168.72.175
	I1026 15:13:55.209540  170754 certs.go:195] generating shared ca certs ...
	I1026 15:13:55.209561  170754 certs.go:227] acquiring lock for ca certs: {Name:mk93131c71acd79b9ab313e88723331b0af2d4bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:13:55.209775  170754 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-137233/.minikube/ca.key
	I1026 15:13:55.209847  170754 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-137233/.minikube/proxy-client-ca.key
	I1026 15:13:55.209859  170754 certs.go:257] generating profile certs ...
	I1026 15:13:55.209962  170754 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/pause-750553/client.key
	I1026 15:13:55.210070  170754 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/pause-750553/apiserver.key.aac75c18
	I1026 15:13:55.210138  170754 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/pause-750553/proxy-client.key
	I1026 15:13:55.210316  170754 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/141233.pem (1338 bytes)
	W1026 15:13:55.210361  170754 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-137233/.minikube/certs/141233_empty.pem, impossibly tiny 0 bytes
	I1026 15:13:55.210369  170754 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca-key.pem (1679 bytes)
	I1026 15:13:55.210409  170754 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca.pem (1082 bytes)
	I1026 15:13:55.210431  170754 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/cert.pem (1123 bytes)
	I1026 15:13:55.210471  170754 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/key.pem (1675 bytes)
	I1026 15:13:55.210534  170754 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-137233/.minikube/files/etc/ssl/certs/1412332.pem (1708 bytes)
	I1026 15:13:55.211351  170754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 15:13:55.245735  170754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1026 15:13:55.276697  170754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 15:13:55.312814  170754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 15:13:55.345953  170754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/pause-750553/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1026 15:13:55.384907  170754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/pause-750553/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 15:13:55.422690  170754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/pause-750553/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 15:13:55.456855  170754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/pause-750553/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 15:13:55.491124  170754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 15:13:55.524484  170754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/certs/141233.pem --> /usr/share/ca-certificates/141233.pem (1338 bytes)
	I1026 15:13:55.562543  170754 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/files/etc/ssl/certs/1412332.pem --> /usr/share/ca-certificates/1412332.pem (1708 bytes)
	I1026 15:13:55.602391  170754 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 15:13:55.628901  170754 ssh_runner.go:195] Run: openssl version
	I1026 15:13:55.636872  170754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 15:13:55.654278  170754 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:13:55.659225  170754 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 14:16 /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:13:55.659289  170754 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:13:55.667735  170754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 15:13:55.683078  170754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/141233.pem && ln -fs /usr/share/ca-certificates/141233.pem /etc/ssl/certs/141233.pem"
	I1026 15:13:55.696208  170754 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/141233.pem
	I1026 15:13:55.702821  170754 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 14:24 /usr/share/ca-certificates/141233.pem
	I1026 15:13:55.702899  170754 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/141233.pem
	I1026 15:13:55.711309  170754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/141233.pem /etc/ssl/certs/51391683.0"
	I1026 15:13:55.724070  170754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1412332.pem && ln -fs /usr/share/ca-certificates/1412332.pem /etc/ssl/certs/1412332.pem"
	I1026 15:13:55.738225  170754 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1412332.pem
	I1026 15:13:55.744118  170754 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 14:24 /usr/share/ca-certificates/1412332.pem
	I1026 15:13:55.744175  170754 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1412332.pem
	I1026 15:13:55.752334  170754 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1412332.pem /etc/ssl/certs/3ec20f2e.0"
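The .0 names used for the symlinks above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject hashes, so the mapping can be reproduced by hand on the node; a short sketch:

    for pem in minikubeCA.pem 141233.pem 1412332.pem; do
        h=$(openssl x509 -hash -noout -in "/usr/share/ca-certificates/$pem")
        printf '%s -> /etc/ssl/certs/%s.0\n' "$pem" "$h"
        ls -l "/etc/ssl/certs/$h.0"
    done
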
	I1026 15:13:55.764972  170754 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 15:13:55.770414  170754 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 15:13:55.778427  170754 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 15:13:55.786752  170754 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 15:13:55.795169  170754 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 15:13:55.805352  170754 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 15:13:55.814130  170754 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
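The `-checkend 86400` calls above succeed only if the certificate is still valid 24 hours from now, so the exit code alone is enough to gate on imminent expiry; a minimal sketch against one of the same files:

    if ! sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
        echo "apiserver-kubelet-client.crt expires within 24h - regenerate before proceeding"
    fi
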
	I1026 15:13:55.822956  170754 kubeadm.go:400] StartCluster: {Name:pause-750553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 Cl
usterName:pause-750553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.175 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-
gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:13:55.823134  170754 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 15:13:55.823200  170754 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 15:13:55.874431  170754 cri.go:89] found id: "a829b93188eae9f7d8022317e73e6555da7294c8eacd4488e8a14341738a69de"
	I1026 15:13:55.874486  170754 cri.go:89] found id: "4517700a8fb8c777ec4e07b98cb1a392c6629f9dd19fdb0058f5650e82d81f5f"
	I1026 15:13:55.874493  170754 cri.go:89] found id: "7b490bc45d498f921ac86eecf23d2e02d5f1762dbabbe70665921b8a617269f9"
	I1026 15:13:55.874498  170754 cri.go:89] found id: "ae391e45391d3c752ce1d2eb3e25b24f9775855ea13aa9c77062e182dcf5cb4c"
	I1026 15:13:55.874502  170754 cri.go:89] found id: "332b05b5dd2dbc70b26f053cf91a7ed45b35de9991b8bb83ccdcce113d47c422"
	I1026 15:13:55.874506  170754 cri.go:89] found id: "166f2eb89b33cbd862e08c281e6e5576f802f0b86641d1002d11841d6e9174ad"
	I1026 15:13:55.874511  170754 cri.go:89] found id: "c46275e6d785b2d85c40cc27654501a1b3f062c629be0be58289dbcdc520693c"
	I1026 15:13:55.874533  170754 cri.go:89] found id: "d554fbe20b15588666b6e80b64f0b06061139965ba51d8e9a17c3e6741e26eb5"
	I1026 15:13:55.874539  170754 cri.go:89] found id: "c2eb59ce11415f4c48d08a8b45e2648e754062ced1418fd4eca5fe560261f0d8"
	I1026 15:13:55.874549  170754 cri.go:89] found id: "96f9dd4daa8742de34d06dfab7a20c8447e2af9c5cad17c8fdbfd909e81e3a02"
	I1026 15:13:55.874553  170754 cri.go:89] found id: ""
	I1026 15:13:55.874598  170754 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
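Any of the kube-system container IDs listed at the end of that log can be examined directly with crictl on the node. An illustrative sketch using the first ID from the list and the same namespace-label filter the log itself uses:

    minikube -p pause-750553 ssh -- \
        'sudo crictl inspect a829b93188eae9f7d8022317e73e6555da7294c8eacd4488e8a14341738a69de | head -n 40'
    minikube -p pause-750553 ssh -- 'sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system'
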
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-750553 -n pause-750553
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-750553 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-750553 logs -n 25: (1.108944537s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                         ARGS                                                                         │      PROFILE       │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-961864 sudo systemctl status kubelet --all --full --no-pager                                                                               │ bridge-961864      │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │ 26 Oct 25 15:17 UTC │
	│ ssh     │ -p bridge-961864 sudo systemctl cat kubelet --no-pager                                                                                               │ bridge-961864      │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │ 26 Oct 25 15:17 UTC │
	│ ssh     │ -p bridge-961864 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                │ bridge-961864      │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │ 26 Oct 25 15:17 UTC │
	│ ssh     │ -p bridge-961864 sudo cat /etc/kubernetes/kubelet.conf                                                                                               │ bridge-961864      │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │ 26 Oct 25 15:17 UTC │
	│ ssh     │ -p bridge-961864 sudo cat /var/lib/kubelet/config.yaml                                                                                               │ bridge-961864      │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │ 26 Oct 25 15:17 UTC │
	│ ssh     │ -p bridge-961864 sudo systemctl status docker --all --full --no-pager                                                                                │ bridge-961864      │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │                     │
	│ ssh     │ -p bridge-961864 sudo systemctl cat docker --no-pager                                                                                                │ bridge-961864      │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │ 26 Oct 25 15:17 UTC │
	│ ssh     │ -p bridge-961864 sudo cat /etc/docker/daemon.json                                                                                                    │ bridge-961864      │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │ 26 Oct 25 15:17 UTC │
	│ ssh     │ -p bridge-961864 sudo docker system info                                                                                                             │ bridge-961864      │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │                     │
	│ ssh     │ -p bridge-961864 sudo systemctl status cri-docker --all --full --no-pager                                                                            │ bridge-961864      │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │                     │
	│ ssh     │ -p bridge-961864 sudo systemctl cat cri-docker --no-pager                                                                                            │ bridge-961864      │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │ 26 Oct 25 15:17 UTC │
	│ ssh     │ -p bridge-961864 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                       │ bridge-961864      │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │                     │
	│ ssh     │ -p bridge-961864 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                 │ bridge-961864      │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │ 26 Oct 25 15:17 UTC │
	│ ssh     │ -p bridge-961864 sudo cri-dockerd --version                                                                                                          │ bridge-961864      │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │ 26 Oct 25 15:17 UTC │
	│ ssh     │ -p bridge-961864 sudo systemctl status containerd --all --full --no-pager                                                                            │ bridge-961864      │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │                     │
	│ ssh     │ -p bridge-961864 sudo systemctl cat containerd --no-pager                                                                                            │ bridge-961864      │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │ 26 Oct 25 15:17 UTC │
	│ ssh     │ -p bridge-961864 sudo cat /lib/systemd/system/containerd.service                                                                                     │ bridge-961864      │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │ 26 Oct 25 15:17 UTC │
	│ ssh     │ -p bridge-961864 sudo cat /etc/containerd/config.toml                                                                                                │ bridge-961864      │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │ 26 Oct 25 15:17 UTC │
	│ ssh     │ -p bridge-961864 sudo containerd config dump                                                                                                         │ bridge-961864      │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │ 26 Oct 25 15:17 UTC │
	│ ssh     │ -p bridge-961864 sudo systemctl status crio --all --full --no-pager                                                                                  │ bridge-961864      │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │ 26 Oct 25 15:17 UTC │
	│ ssh     │ -p bridge-961864 sudo systemctl cat crio --no-pager                                                                                                  │ bridge-961864      │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │ 26 Oct 25 15:17 UTC │
	│ ssh     │ -p bridge-961864 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                        │ bridge-961864      │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │ 26 Oct 25 15:17 UTC │
	│ ssh     │ -p bridge-961864 sudo crio config                                                                                                                    │ bridge-961864      │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │ 26 Oct 25 15:17 UTC │
	│ delete  │ -p bridge-961864                                                                                                                                     │ bridge-961864      │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │ 26 Oct 25 15:17 UTC │
	│ start   │ -p embed-certs-163393 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1 │ embed-certs-163393 │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 15:17:51
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 15:17:51.147910  178853 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:17:51.148197  178853 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:17:51.148215  178853 out.go:374] Setting ErrFile to fd 2...
	I1026 15:17:51.148220  178853 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:17:51.148403  178853 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-137233/.minikube/bin
	I1026 15:17:51.148903  178853 out.go:368] Setting JSON to false
	I1026 15:17:51.149878  178853 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7205,"bootTime":1761484666,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 15:17:51.149977  178853 start.go:141] virtualization: kvm guest
	I1026 15:17:51.151763  178853 out.go:179] * [embed-certs-163393] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 15:17:51.153230  178853 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 15:17:51.153272  178853 notify.go:220] Checking for updates...
	I1026 15:17:51.155573  178853 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 15:17:51.156759  178853 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-137233/kubeconfig
	I1026 15:17:51.158131  178853 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-137233/.minikube
	I1026 15:17:51.159352  178853 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 15:17:51.160377  178853 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 15:17:51.162001  178853 config.go:182] Loaded profile config "no-preload-758002": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:17:51.162148  178853 config.go:182] Loaded profile config "old-k8s-version-065983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1026 15:17:51.162319  178853 config.go:182] Loaded profile config "pause-750553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:17:51.162435  178853 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 15:17:51.203597  178853 out.go:179] * Using the kvm2 driver based on user configuration
	I1026 15:17:51.204531  178853 start.go:305] selected driver: kvm2
	I1026 15:17:51.204546  178853 start.go:925] validating driver "kvm2" against <nil>
	I1026 15:17:51.204558  178853 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 15:17:51.205289  178853 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 15:17:51.205575  178853 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:17:51.205602  178853 cni.go:84] Creating CNI manager for ""
	I1026 15:17:51.205666  178853 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 15:17:51.205677  178853 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1026 15:17:51.205744  178853 start.go:349] cluster config:
	{Name:embed-certs-163393 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-163393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:
0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:17:51.205889  178853 iso.go:125] acquiring lock: {Name:mkfe78fcc13f0f0cc3fec30206c34a5da423b32d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:17:51.207130  178853 out.go:179] * Starting "embed-certs-163393" primary control-plane node in "embed-certs-163393" cluster
	I1026 15:17:51.207973  178853 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:17:51.208005  178853 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-137233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1026 15:17:51.208015  178853 cache.go:58] Caching tarball of preloaded images
	I1026 15:17:51.208098  178853 preload.go:233] Found /home/jenkins/minikube-integration/21664-137233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 15:17:51.208110  178853 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 15:17:51.208185  178853 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/config.json ...
	I1026 15:17:51.208201  178853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/config.json: {Name:mkdb48ff5a82f3eb9f8a31e51d858377286df427 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:17:51.208332  178853 start.go:360] acquireMachinesLock for embed-certs-163393: {Name:mka0e861669c2f6d38861d0614c7d3b8dd89392c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1026 15:17:51.208360  178853 start.go:364] duration metric: took 15.048µs to acquireMachinesLock for "embed-certs-163393"
	I1026 15:17:51.208377  178853 start.go:93] Provisioning new machine with config: &{Name:embed-certs-163393 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.34.1 ClusterName:embed-certs-163393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bin
aryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:17:51.208425  178853 start.go:125] createHost starting for "" (driver="kvm2")
	I1026 15:17:48.698138  177820 out.go:252]   - Generating certificates and keys ...
	I1026 15:17:48.698256  177820 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1026 15:17:48.698362  177820 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1026 15:17:49.158713  177820 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 15:17:49.541078  177820 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1026 15:17:49.623791  177820 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1026 15:17:49.795012  177820 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1026 15:17:50.265370  177820 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1026 15:17:50.265638  177820 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-758002] and IPs [192.168.50.112 127.0.0.1 ::1]
	I1026 15:17:50.329991  177820 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1026 15:17:50.330222  177820 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-758002] and IPs [192.168.50.112 127.0.0.1 ::1]
	I1026 15:17:50.409591  177820 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 15:17:50.535871  177820 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 15:17:50.650659  177820 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1026 15:17:50.650762  177820 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 15:17:50.863171  177820 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 15:17:51.185589  177820 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 15:17:51.355852  177820 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 15:17:51.428943  177820 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 15:17:51.491351  177820 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 15:17:51.492399  177820 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 15:17:51.494791  177820 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 15:17:51.499594  177820 out.go:252]   - Booting up control plane ...
	I1026 15:17:51.499744  177820 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 15:17:51.499859  177820 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 15:17:51.499983  177820 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 15:17:51.518034  177820 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 15:17:51.518182  177820 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1026 15:17:51.525893  177820 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1026 15:17:51.526119  177820 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 15:17:51.526210  177820 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1026 15:17:51.747496  177820 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 15:17:51.747668  177820 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 15:17:52.751876  177820 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.003791326s
	I1026 15:17:52.768370  177820 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1026 15:17:52.769299  177820 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.50.112:8443/livez
	I1026 15:17:52.769448  177820 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1026 15:17:52.769578  177820 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1026 15:17:48.346173  170754 logs.go:123] Gathering logs for dmesg ...
	I1026 15:17:48.346208  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 15:17:48.364392  170754 logs.go:123] Gathering logs for describe nodes ...
	I1026 15:17:48.364423  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 15:17:48.448657  170754 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
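When `kubectl describe nodes` fails with connection refused on localhost:8443, as in the stderr block above, the usual first checks are whether the kube-apiserver container is running and whether anything answers on the port. A sketch, reusing the crictl filter the log itself runs a few lines further down:

    minikube -p pause-750553 ssh -- 'sudo crictl ps -a --name kube-apiserver'
    minikube -p pause-750553 ssh -- 'curl -ksS https://localhost:8443/livez; echo'
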
	I1026 15:17:48.448677  170754 logs.go:123] Gathering logs for etcd [bb4299fbd87de3fa7285427701084c7eb64247af9c01979f2a77e504b49ae922] ...
	I1026 15:17:48.448691  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb4299fbd87de3fa7285427701084c7eb64247af9c01979f2a77e504b49ae922"
	I1026 15:17:48.504353  170754 logs.go:123] Gathering logs for etcd [7b490bc45d498f921ac86eecf23d2e02d5f1762dbabbe70665921b8a617269f9] ...
	I1026 15:17:48.504398  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b490bc45d498f921ac86eecf23d2e02d5f1762dbabbe70665921b8a617269f9"
	I1026 15:17:48.555160  170754 logs.go:123] Gathering logs for CRI-O ...
	I1026 15:17:48.555211  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 15:17:51.413574  170754 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:17:51.437138  170754 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 15:17:51.437209  170754 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 15:17:51.486105  170754 cri.go:89] found id: "c2eb59ce11415f4c48d08a8b45e2648e754062ced1418fd4eca5fe560261f0d8"
	I1026 15:17:51.486141  170754 cri.go:89] found id: ""
	I1026 15:17:51.486155  170754 logs.go:282] 1 containers: [c2eb59ce11415f4c48d08a8b45e2648e754062ced1418fd4eca5fe560261f0d8]
	I1026 15:17:51.486228  170754 ssh_runner.go:195] Run: which crictl
	I1026 15:17:51.490771  170754 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 15:17:51.490862  170754 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 15:17:51.542332  170754 cri.go:89] found id: "bb4299fbd87de3fa7285427701084c7eb64247af9c01979f2a77e504b49ae922"
	I1026 15:17:51.542362  170754 cri.go:89] found id: "7b490bc45d498f921ac86eecf23d2e02d5f1762dbabbe70665921b8a617269f9"
	I1026 15:17:51.542369  170754 cri.go:89] found id: ""
	I1026 15:17:51.542380  170754 logs.go:282] 2 containers: [bb4299fbd87de3fa7285427701084c7eb64247af9c01979f2a77e504b49ae922 7b490bc45d498f921ac86eecf23d2e02d5f1762dbabbe70665921b8a617269f9]
	I1026 15:17:51.542470  170754 ssh_runner.go:195] Run: which crictl
	I1026 15:17:51.547814  170754 ssh_runner.go:195] Run: which crictl
	I1026 15:17:51.553515  170754 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 15:17:51.553584  170754 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 15:17:51.599839  170754 cri.go:89] found id: "a829b93188eae9f7d8022317e73e6555da7294c8eacd4488e8a14341738a69de"
	I1026 15:17:51.599865  170754 cri.go:89] found id: ""
	I1026 15:17:51.599873  170754 logs.go:282] 1 containers: [a829b93188eae9f7d8022317e73e6555da7294c8eacd4488e8a14341738a69de]
	I1026 15:17:51.599930  170754 ssh_runner.go:195] Run: which crictl
	I1026 15:17:51.604575  170754 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 15:17:51.604633  170754 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 15:17:51.645484  170754 cri.go:89] found id: "f605e623b16c68aff5d3a7edacbe9493943f88f81f618ce23a4dc59180fa8148"
	I1026 15:17:51.645512  170754 cri.go:89] found id: "ae391e45391d3c752ce1d2eb3e25b24f9775855ea13aa9c77062e182dcf5cb4c"
	I1026 15:17:51.645518  170754 cri.go:89] found id: ""
	I1026 15:17:51.645529  170754 logs.go:282] 2 containers: [f605e623b16c68aff5d3a7edacbe9493943f88f81f618ce23a4dc59180fa8148 ae391e45391d3c752ce1d2eb3e25b24f9775855ea13aa9c77062e182dcf5cb4c]
	I1026 15:17:51.645600  170754 ssh_runner.go:195] Run: which crictl
	I1026 15:17:51.650153  170754 ssh_runner.go:195] Run: which crictl
	I1026 15:17:51.655044  170754 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 15:17:51.655091  170754 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 15:17:51.694776  170754 cri.go:89] found id: "4517700a8fb8c777ec4e07b98cb1a392c6629f9dd19fdb0058f5650e82d81f5f"
	I1026 15:17:51.694804  170754 cri.go:89] found id: ""
	I1026 15:17:51.694815  170754 logs.go:282] 1 containers: [4517700a8fb8c777ec4e07b98cb1a392c6629f9dd19fdb0058f5650e82d81f5f]
	I1026 15:17:51.694884  170754 ssh_runner.go:195] Run: which crictl
	I1026 15:17:51.700514  170754 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 15:17:51.700581  170754 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 15:17:51.741646  170754 cri.go:89] found id: "d554fbe20b15588666b6e80b64f0b06061139965ba51d8e9a17c3e6741e26eb5"
	I1026 15:17:51.741681  170754 cri.go:89] found id: ""
	I1026 15:17:51.741695  170754 logs.go:282] 1 containers: [d554fbe20b15588666b6e80b64f0b06061139965ba51d8e9a17c3e6741e26eb5]
	I1026 15:17:51.741764  170754 ssh_runner.go:195] Run: which crictl
	I1026 15:17:51.747517  170754 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 15:17:51.747585  170754 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 15:17:51.798072  170754 cri.go:89] found id: ""
	I1026 15:17:51.798115  170754 logs.go:282] 0 containers: []
	W1026 15:17:51.798139  170754 logs.go:284] No container was found matching "kindnet"
	I1026 15:17:51.798163  170754 logs.go:123] Gathering logs for kube-proxy [4517700a8fb8c777ec4e07b98cb1a392c6629f9dd19fdb0058f5650e82d81f5f] ...
	I1026 15:17:51.798185  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4517700a8fb8c777ec4e07b98cb1a392c6629f9dd19fdb0058f5650e82d81f5f"
	I1026 15:17:51.841651  170754 logs.go:123] Gathering logs for kube-controller-manager [d554fbe20b15588666b6e80b64f0b06061139965ba51d8e9a17c3e6741e26eb5] ...
	I1026 15:17:51.841684  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d554fbe20b15588666b6e80b64f0b06061139965ba51d8e9a17c3e6741e26eb5"
	I1026 15:17:51.893207  170754 logs.go:123] Gathering logs for container status ...
	I1026 15:17:51.893250  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 15:17:51.939864  170754 logs.go:123] Gathering logs for dmesg ...
	I1026 15:17:51.939900  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 15:17:51.955512  170754 logs.go:123] Gathering logs for describe nodes ...
	I1026 15:17:51.955541  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 15:17:52.032163  170754 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 15:17:52.032189  170754 logs.go:123] Gathering logs for kube-apiserver [c2eb59ce11415f4c48d08a8b45e2648e754062ced1418fd4eca5fe560261f0d8] ...
	I1026 15:17:52.032205  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2eb59ce11415f4c48d08a8b45e2648e754062ced1418fd4eca5fe560261f0d8"
	I1026 15:17:52.098688  170754 logs.go:123] Gathering logs for etcd [bb4299fbd87de3fa7285427701084c7eb64247af9c01979f2a77e504b49ae922] ...
	I1026 15:17:52.098726  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb4299fbd87de3fa7285427701084c7eb64247af9c01979f2a77e504b49ae922"
	I1026 15:17:52.143993  170754 logs.go:123] Gathering logs for coredns [a829b93188eae9f7d8022317e73e6555da7294c8eacd4488e8a14341738a69de] ...
	I1026 15:17:52.144026  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a829b93188eae9f7d8022317e73e6555da7294c8eacd4488e8a14341738a69de"
	I1026 15:17:52.181622  170754 logs.go:123] Gathering logs for kube-scheduler [f605e623b16c68aff5d3a7edacbe9493943f88f81f618ce23a4dc59180fa8148] ...
	I1026 15:17:52.181655  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f605e623b16c68aff5d3a7edacbe9493943f88f81f618ce23a4dc59180fa8148"
	I1026 15:17:52.257644  170754 logs.go:123] Gathering logs for kube-scheduler [ae391e45391d3c752ce1d2eb3e25b24f9775855ea13aa9c77062e182dcf5cb4c] ...
	I1026 15:17:52.257680  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae391e45391d3c752ce1d2eb3e25b24f9775855ea13aa9c77062e182dcf5cb4c"
	I1026 15:17:52.296652  170754 logs.go:123] Gathering logs for CRI-O ...
	I1026 15:17:52.296681  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 15:17:52.580982  170754 logs.go:123] Gathering logs for kubelet ...
	I1026 15:17:52.581013  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 15:17:52.672764  170754 logs.go:123] Gathering logs for etcd [7b490bc45d498f921ac86eecf23d2e02d5f1762dbabbe70665921b8a617269f9] ...
	I1026 15:17:52.672801  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b490bc45d498f921ac86eecf23d2e02d5f1762dbabbe70665921b8a617269f9"
	W1026 15:17:52.198631  176942 pod_ready.go:104] pod "coredns-5dd5756b68-46566" is not "Ready", error: <nil>
	W1026 15:17:54.200283  176942 pod_ready.go:104] pod "coredns-5dd5756b68-46566" is not "Ready", error: <nil>
	I1026 15:17:51.209683  178853 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1026 15:17:51.209848  178853 start.go:159] libmachine.API.Create for "embed-certs-163393" (driver="kvm2")
	I1026 15:17:51.209883  178853 client.go:168] LocalClient.Create starting
	I1026 15:17:51.209935  178853 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca.pem
	I1026 15:17:51.209970  178853 main.go:141] libmachine: Decoding PEM data...
	I1026 15:17:51.209982  178853 main.go:141] libmachine: Parsing certificate...
	I1026 15:17:51.210052  178853 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-137233/.minikube/certs/cert.pem
	I1026 15:17:51.210079  178853 main.go:141] libmachine: Decoding PEM data...
	I1026 15:17:51.210095  178853 main.go:141] libmachine: Parsing certificate...
	I1026 15:17:51.210390  178853 main.go:141] libmachine: creating domain...
	I1026 15:17:51.210403  178853 main.go:141] libmachine: creating network...
	I1026 15:17:51.211843  178853 main.go:141] libmachine: found existing default network
	I1026 15:17:51.212077  178853 main.go:141] libmachine: <network connections='3'>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1026 15:17:51.213120  178853 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001bc2c60}
	I1026 15:17:51.213192  178853 main.go:141] libmachine: defining private network:
	
	<network>
	  <name>mk-embed-certs-163393</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1026 15:17:51.218283  178853 main.go:141] libmachine: creating private network mk-embed-certs-163393 192.168.39.0/24...
	I1026 15:17:51.287179  178853 main.go:141] libmachine: private network mk-embed-certs-163393 192.168.39.0/24 created
	I1026 15:17:51.287491  178853 main.go:141] libmachine: <network>
	  <name>mk-embed-certs-163393</name>
	  <uuid>d35dbd72-8087-471b-8adf-d60064f596c2</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:ea:64:02'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1026 15:17:51.287523  178853 main.go:141] libmachine: setting up store path in /home/jenkins/minikube-integration/21664-137233/.minikube/machines/embed-certs-163393 ...
	I1026 15:17:51.287554  178853 main.go:141] libmachine: building disk image from file:///home/jenkins/minikube-integration/21664-137233/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso
	I1026 15:17:51.287568  178853 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21664-137233/.minikube
	I1026 15:17:51.287654  178853 main.go:141] libmachine: Downloading /home/jenkins/minikube-integration/21664-137233/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21664-137233/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso...
	I1026 15:17:51.623041  178853 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21664-137233/.minikube/machines/embed-certs-163393/id_rsa...
	I1026 15:17:51.910503  178853 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21664-137233/.minikube/machines/embed-certs-163393/embed-certs-163393.rawdisk...
	I1026 15:17:51.910549  178853 main.go:141] libmachine: Writing magic tar header
	I1026 15:17:51.910591  178853 main.go:141] libmachine: Writing SSH key tar header
	I1026 15:17:51.910713  178853 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21664-137233/.minikube/machines/embed-certs-163393 ...
	I1026 15:17:51.910818  178853 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21664-137233/.minikube/machines/embed-certs-163393
	I1026 15:17:51.910872  178853 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21664-137233/.minikube/machines/embed-certs-163393 (perms=drwx------)
	I1026 15:17:51.910901  178853 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21664-137233/.minikube/machines
	I1026 15:17:51.910923  178853 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21664-137233/.minikube/machines (perms=drwxr-xr-x)
	I1026 15:17:51.910947  178853 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21664-137233/.minikube
	I1026 15:17:51.910962  178853 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21664-137233/.minikube (perms=drwxr-xr-x)
	I1026 15:17:51.910979  178853 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21664-137233
	I1026 15:17:51.911028  178853 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21664-137233 (perms=drwxrwxr-x)
	I1026 15:17:51.911065  178853 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1026 15:17:51.911084  178853 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1026 15:17:51.911103  178853 main.go:141] libmachine: checking permissions on dir: /home/jenkins
	I1026 15:17:51.911134  178853 main.go:141] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1026 15:17:51.911155  178853 main.go:141] libmachine: checking permissions on dir: /home
	I1026 15:17:51.911172  178853 main.go:141] libmachine: skipping /home - not owner
	I1026 15:17:51.911183  178853 main.go:141] libmachine: defining domain...
	I1026 15:17:51.912826  178853 main.go:141] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>embed-certs-163393</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21664-137233/.minikube/machines/embed-certs-163393/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21664-137233/.minikube/machines/embed-certs-163393/embed-certs-163393.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-embed-certs-163393'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1026 15:17:51.927139  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:d6:fa:42 in network default
	I1026 15:17:51.928070  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:17:51.928094  178853 main.go:141] libmachine: starting domain...
	I1026 15:17:51.928102  178853 main.go:141] libmachine: ensuring networks are active...
	I1026 15:17:51.929107  178853 main.go:141] libmachine: Ensuring network default is active
	I1026 15:17:51.929723  178853 main.go:141] libmachine: Ensuring network mk-embed-certs-163393 is active
	I1026 15:17:51.930517  178853 main.go:141] libmachine: getting domain XML...
	I1026 15:17:51.931944  178853 main.go:141] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>embed-certs-163393</name>
	  <uuid>31c0eca2-26a4-41c9-a6df-d975a508de47</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21664-137233/.minikube/machines/embed-certs-163393/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21664-137233/.minikube/machines/embed-certs-163393/embed-certs-163393.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:bb:5d:75'/>
	      <source network='mk-embed-certs-163393'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:d6:fa:42'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1026 15:17:53.405657  178853 main.go:141] libmachine: waiting for domain to start...
	I1026 15:17:53.407176  178853 main.go:141] libmachine: domain is now running
	I1026 15:17:53.407191  178853 main.go:141] libmachine: waiting for IP...
	I1026 15:17:53.408058  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:17:53.408911  178853 main.go:141] libmachine: no network interface addresses found for domain embed-certs-163393 (source=lease)
	I1026 15:17:53.408932  178853 main.go:141] libmachine: trying to list again with source=arp
	I1026 15:17:53.409391  178853 main.go:141] libmachine: unable to find current IP address of domain embed-certs-163393 in network mk-embed-certs-163393 (interfaces detected: [])
	I1026 15:17:53.409485  178853 retry.go:31] will retry after 224.378935ms: waiting for domain to come up
	I1026 15:17:53.636367  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:17:53.637347  178853 main.go:141] libmachine: no network interface addresses found for domain embed-certs-163393 (source=lease)
	I1026 15:17:53.637371  178853 main.go:141] libmachine: trying to list again with source=arp
	I1026 15:17:53.637830  178853 main.go:141] libmachine: unable to find current IP address of domain embed-certs-163393 in network mk-embed-certs-163393 (interfaces detected: [])
	I1026 15:17:53.637878  178853 retry.go:31] will retry after 370.25291ms: waiting for domain to come up
	I1026 15:17:54.009733  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:17:54.010685  178853 main.go:141] libmachine: no network interface addresses found for domain embed-certs-163393 (source=lease)
	I1026 15:17:54.010706  178853 main.go:141] libmachine: trying to list again with source=arp
	I1026 15:17:54.011127  178853 main.go:141] libmachine: unable to find current IP address of domain embed-certs-163393 in network mk-embed-certs-163393 (interfaces detected: [])
	I1026 15:17:54.011197  178853 retry.go:31] will retry after 386.092672ms: waiting for domain to come up
	I1026 15:17:54.398647  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:17:54.399486  178853 main.go:141] libmachine: no network interface addresses found for domain embed-certs-163393 (source=lease)
	I1026 15:17:54.399507  178853 main.go:141] libmachine: trying to list again with source=arp
	I1026 15:17:54.399943  178853 main.go:141] libmachine: unable to find current IP address of domain embed-certs-163393 in network mk-embed-certs-163393 (interfaces detected: [])
	I1026 15:17:54.399985  178853 retry.go:31] will retry after 586.427877ms: waiting for domain to come up
	I1026 15:17:54.987961  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:17:54.989040  178853 main.go:141] libmachine: no network interface addresses found for domain embed-certs-163393 (source=lease)
	I1026 15:17:54.989066  178853 main.go:141] libmachine: trying to list again with source=arp
	I1026 15:17:54.989529  178853 main.go:141] libmachine: unable to find current IP address of domain embed-certs-163393 in network mk-embed-certs-163393 (interfaces detected: [])
	I1026 15:17:54.989573  178853 retry.go:31] will retry after 576.503336ms: waiting for domain to come up
	I1026 15:17:55.567671  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:17:55.568615  178853 main.go:141] libmachine: no network interface addresses found for domain embed-certs-163393 (source=lease)
	I1026 15:17:55.568638  178853 main.go:141] libmachine: trying to list again with source=arp
	I1026 15:17:55.569117  178853 main.go:141] libmachine: unable to find current IP address of domain embed-certs-163393 in network mk-embed-certs-163393 (interfaces detected: [])
	I1026 15:17:55.569174  178853 retry.go:31] will retry after 890.583074ms: waiting for domain to come up
	I1026 15:17:56.439540  177820 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.669618565s
	I1026 15:17:57.141896  177820 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.373548346s
	I1026 15:17:55.231602  170754 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:17:55.254037  170754 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 15:17:55.254103  170754 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 15:17:55.309992  170754 cri.go:89] found id: "c2eb59ce11415f4c48d08a8b45e2648e754062ced1418fd4eca5fe560261f0d8"
	I1026 15:17:55.310025  170754 cri.go:89] found id: ""
	I1026 15:17:55.310036  170754 logs.go:282] 1 containers: [c2eb59ce11415f4c48d08a8b45e2648e754062ced1418fd4eca5fe560261f0d8]
	I1026 15:17:55.310099  170754 ssh_runner.go:195] Run: which crictl
	I1026 15:17:55.315732  170754 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 15:17:55.315799  170754 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 15:17:55.362009  170754 cri.go:89] found id: "bb4299fbd87de3fa7285427701084c7eb64247af9c01979f2a77e504b49ae922"
	I1026 15:17:55.362032  170754 cri.go:89] found id: "7b490bc45d498f921ac86eecf23d2e02d5f1762dbabbe70665921b8a617269f9"
	I1026 15:17:55.362037  170754 cri.go:89] found id: ""
	I1026 15:17:55.362046  170754 logs.go:282] 2 containers: [bb4299fbd87de3fa7285427701084c7eb64247af9c01979f2a77e504b49ae922 7b490bc45d498f921ac86eecf23d2e02d5f1762dbabbe70665921b8a617269f9]
	I1026 15:17:55.362112  170754 ssh_runner.go:195] Run: which crictl
	I1026 15:17:55.367502  170754 ssh_runner.go:195] Run: which crictl
	I1026 15:17:55.373256  170754 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 15:17:55.373343  170754 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 15:17:55.417093  170754 cri.go:89] found id: "a829b93188eae9f7d8022317e73e6555da7294c8eacd4488e8a14341738a69de"
	I1026 15:17:55.417121  170754 cri.go:89] found id: ""
	I1026 15:17:55.417134  170754 logs.go:282] 1 containers: [a829b93188eae9f7d8022317e73e6555da7294c8eacd4488e8a14341738a69de]
	I1026 15:17:55.417208  170754 ssh_runner.go:195] Run: which crictl
	I1026 15:17:55.422018  170754 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 15:17:55.422091  170754 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 15:17:55.470543  170754 cri.go:89] found id: "f605e623b16c68aff5d3a7edacbe9493943f88f81f618ce23a4dc59180fa8148"
	I1026 15:17:55.470612  170754 cri.go:89] found id: "ae391e45391d3c752ce1d2eb3e25b24f9775855ea13aa9c77062e182dcf5cb4c"
	I1026 15:17:55.470621  170754 cri.go:89] found id: ""
	I1026 15:17:55.470646  170754 logs.go:282] 2 containers: [f605e623b16c68aff5d3a7edacbe9493943f88f81f618ce23a4dc59180fa8148 ae391e45391d3c752ce1d2eb3e25b24f9775855ea13aa9c77062e182dcf5cb4c]
	I1026 15:17:55.470733  170754 ssh_runner.go:195] Run: which crictl
	I1026 15:17:55.475957  170754 ssh_runner.go:195] Run: which crictl
	I1026 15:17:55.480694  170754 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 15:17:55.480757  170754 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 15:17:55.521988  170754 cri.go:89] found id: "4517700a8fb8c777ec4e07b98cb1a392c6629f9dd19fdb0058f5650e82d81f5f"
	I1026 15:17:55.522016  170754 cri.go:89] found id: ""
	I1026 15:17:55.522028  170754 logs.go:282] 1 containers: [4517700a8fb8c777ec4e07b98cb1a392c6629f9dd19fdb0058f5650e82d81f5f]
	I1026 15:17:55.522095  170754 ssh_runner.go:195] Run: which crictl
	I1026 15:17:55.527353  170754 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 15:17:55.527430  170754 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 15:17:55.570588  170754 cri.go:89] found id: "d554fbe20b15588666b6e80b64f0b06061139965ba51d8e9a17c3e6741e26eb5"
	I1026 15:17:55.570609  170754 cri.go:89] found id: ""
	I1026 15:17:55.570620  170754 logs.go:282] 1 containers: [d554fbe20b15588666b6e80b64f0b06061139965ba51d8e9a17c3e6741e26eb5]
	I1026 15:17:55.570695  170754 ssh_runner.go:195] Run: which crictl
	I1026 15:17:55.575654  170754 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 15:17:55.575745  170754 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 15:17:55.620226  170754 cri.go:89] found id: ""
	I1026 15:17:55.620260  170754 logs.go:282] 0 containers: []
	W1026 15:17:55.620281  170754 logs.go:284] No container was found matching "kindnet"
	I1026 15:17:55.620295  170754 logs.go:123] Gathering logs for kube-controller-manager [d554fbe20b15588666b6e80b64f0b06061139965ba51d8e9a17c3e6741e26eb5] ...
	I1026 15:17:55.620322  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d554fbe20b15588666b6e80b64f0b06061139965ba51d8e9a17c3e6741e26eb5"
	I1026 15:17:55.673841  170754 logs.go:123] Gathering logs for CRI-O ...
	I1026 15:17:55.673881  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 15:17:56.013750  170754 logs.go:123] Gathering logs for dmesg ...
	I1026 15:17:56.013789  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 15:17:56.033300  170754 logs.go:123] Gathering logs for describe nodes ...
	I1026 15:17:56.033349  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 15:17:56.120012  170754 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 15:17:56.120039  170754 logs.go:123] Gathering logs for kube-apiserver [c2eb59ce11415f4c48d08a8b45e2648e754062ced1418fd4eca5fe560261f0d8] ...
	I1026 15:17:56.120056  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2eb59ce11415f4c48d08a8b45e2648e754062ced1418fd4eca5fe560261f0d8"
	I1026 15:17:56.198143  170754 logs.go:123] Gathering logs for kube-scheduler [f605e623b16c68aff5d3a7edacbe9493943f88f81f618ce23a4dc59180fa8148] ...
	I1026 15:17:56.198207  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f605e623b16c68aff5d3a7edacbe9493943f88f81f618ce23a4dc59180fa8148"
	I1026 15:17:56.294215  170754 logs.go:123] Gathering logs for container status ...
	I1026 15:17:56.294263  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 15:17:56.352990  170754 logs.go:123] Gathering logs for kubelet ...
	I1026 15:17:56.353021  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 15:17:56.441490  170754 logs.go:123] Gathering logs for etcd [bb4299fbd87de3fa7285427701084c7eb64247af9c01979f2a77e504b49ae922] ...
	I1026 15:17:56.441523  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb4299fbd87de3fa7285427701084c7eb64247af9c01979f2a77e504b49ae922"
	I1026 15:17:56.507047  170754 logs.go:123] Gathering logs for etcd [7b490bc45d498f921ac86eecf23d2e02d5f1762dbabbe70665921b8a617269f9] ...
	I1026 15:17:56.507098  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b490bc45d498f921ac86eecf23d2e02d5f1762dbabbe70665921b8a617269f9"
	I1026 15:17:56.561486  170754 logs.go:123] Gathering logs for coredns [a829b93188eae9f7d8022317e73e6555da7294c8eacd4488e8a14341738a69de] ...
	I1026 15:17:56.561533  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a829b93188eae9f7d8022317e73e6555da7294c8eacd4488e8a14341738a69de"
	I1026 15:17:56.601947  170754 logs.go:123] Gathering logs for kube-scheduler [ae391e45391d3c752ce1d2eb3e25b24f9775855ea13aa9c77062e182dcf5cb4c] ...
	I1026 15:17:56.601989  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae391e45391d3c752ce1d2eb3e25b24f9775855ea13aa9c77062e182dcf5cb4c"
	I1026 15:17:56.648854  170754 logs.go:123] Gathering logs for kube-proxy [4517700a8fb8c777ec4e07b98cb1a392c6629f9dd19fdb0058f5650e82d81f5f] ...
	I1026 15:17:56.648899  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4517700a8fb8c777ec4e07b98cb1a392c6629f9dd19fdb0058f5650e82d81f5f"
	I1026 15:17:59.411216  177820 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.640492108s
	I1026 15:17:59.596071  177820 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 15:17:59.618015  177820 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 15:17:59.633901  177820 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 15:17:59.634187  177820 kubeadm.go:318] [mark-control-plane] Marking the node no-preload-758002 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 15:17:59.649595  177820 kubeadm.go:318] [bootstrap-token] Using token: lwo38u.ix7if2n07d2aqidw
	W1026 15:17:56.202647  176942 pod_ready.go:104] pod "coredns-5dd5756b68-46566" is not "Ready", error: <nil>
	W1026 15:17:58.701803  176942 pod_ready.go:104] pod "coredns-5dd5756b68-46566" is not "Ready", error: <nil>
	I1026 15:17:59.650769  177820 out.go:252]   - Configuring RBAC rules ...
	I1026 15:17:59.650913  177820 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 15:17:59.673874  177820 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 15:17:59.689322  177820 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 15:17:59.696082  177820 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 15:17:59.705622  177820 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 15:17:59.709899  177820 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 15:17:59.818884  177820 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 15:18:00.272272  177820 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1026 15:18:00.818848  177820 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1026 15:18:00.819729  177820 kubeadm.go:318] 
	I1026 15:18:00.819825  177820 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1026 15:18:00.819840  177820 kubeadm.go:318] 
	I1026 15:18:00.819936  177820 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1026 15:18:00.819950  177820 kubeadm.go:318] 
	I1026 15:18:00.820021  177820 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1026 15:18:00.820126  177820 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 15:18:00.820226  177820 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 15:18:00.820238  177820 kubeadm.go:318] 
	I1026 15:18:00.820313  177820 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1026 15:18:00.820323  177820 kubeadm.go:318] 
	I1026 15:18:00.820386  177820 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 15:18:00.820401  177820 kubeadm.go:318] 
	I1026 15:18:00.820522  177820 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1026 15:18:00.820651  177820 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 15:18:00.820759  177820 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 15:18:00.820775  177820 kubeadm.go:318] 
	I1026 15:18:00.820885  177820 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 15:18:00.821004  177820 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1026 15:18:00.821023  177820 kubeadm.go:318] 
	I1026 15:18:00.821143  177820 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token lwo38u.ix7if2n07d2aqidw \
	I1026 15:18:00.821298  177820 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3ad055a424ab8eb6b83482448af651001c6d6c03abf832b7f498f66a21acb6be \
	I1026 15:18:00.821324  177820 kubeadm.go:318] 	--control-plane 
	I1026 15:18:00.821329  177820 kubeadm.go:318] 
	I1026 15:18:00.821486  177820 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1026 15:18:00.821505  177820 kubeadm.go:318] 
	I1026 15:18:00.821609  177820 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token lwo38u.ix7if2n07d2aqidw \
	I1026 15:18:00.821744  177820 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3ad055a424ab8eb6b83482448af651001c6d6c03abf832b7f498f66a21acb6be 
	I1026 15:18:00.822835  177820 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 15:18:00.822865  177820 cni.go:84] Creating CNI manager for ""
	I1026 15:18:00.822875  177820 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 15:18:00.824297  177820 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1026 15:17:56.461810  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:17:56.462883  178853 main.go:141] libmachine: no network interface addresses found for domain embed-certs-163393 (source=lease)
	I1026 15:17:56.462910  178853 main.go:141] libmachine: trying to list again with source=arp
	I1026 15:17:56.463427  178853 main.go:141] libmachine: unable to find current IP address of domain embed-certs-163393 in network mk-embed-certs-163393 (interfaces detected: [])
	I1026 15:17:56.463504  178853 retry.go:31] will retry after 740.368024ms: waiting for domain to come up
	I1026 15:17:57.205445  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:17:57.206382  178853 main.go:141] libmachine: no network interface addresses found for domain embed-certs-163393 (source=lease)
	I1026 15:17:57.206407  178853 main.go:141] libmachine: trying to list again with source=arp
	I1026 15:17:57.206864  178853 main.go:141] libmachine: unable to find current IP address of domain embed-certs-163393 in network mk-embed-certs-163393 (interfaces detected: [])
	I1026 15:17:57.206917  178853 retry.go:31] will retry after 1.267858294s: waiting for domain to come up
	I1026 15:17:58.476314  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:17:58.477112  178853 main.go:141] libmachine: no network interface addresses found for domain embed-certs-163393 (source=lease)
	I1026 15:17:58.477129  178853 main.go:141] libmachine: trying to list again with source=arp
	I1026 15:17:58.477577  178853 main.go:141] libmachine: unable to find current IP address of domain embed-certs-163393 in network mk-embed-certs-163393 (interfaces detected: [])
	I1026 15:17:58.477632  178853 retry.go:31] will retry after 1.679056083s: waiting for domain to come up
	I1026 15:18:00.158806  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:00.159519  178853 main.go:141] libmachine: no network interface addresses found for domain embed-certs-163393 (source=lease)
	I1026 15:18:00.159538  178853 main.go:141] libmachine: trying to list again with source=arp
	I1026 15:18:00.159928  178853 main.go:141] libmachine: unable to find current IP address of domain embed-certs-163393 in network mk-embed-certs-163393 (interfaces detected: [])
	I1026 15:18:00.159974  178853 retry.go:31] will retry after 2.179695277s: waiting for domain to come up
	I1026 15:18:00.825478  177820 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1026 15:18:00.838309  177820 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1026 15:18:00.859563  177820 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 15:18:00.859662  177820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:00.859715  177820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-758002 minikube.k8s.io/updated_at=2025_10_26T15_18_00_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46 minikube.k8s.io/name=no-preload-758002 minikube.k8s.io/primary=true
	I1026 15:18:00.917438  177820 ops.go:34] apiserver oom_adj: -16
	I1026 15:18:01.014804  177820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:01.515750  177820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:02.015701  177820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:02.515699  177820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:17:59.187857  170754 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:17:59.208023  170754 kubeadm.go:601] duration metric: took 4m3.266578421s to restartPrimaryControlPlane
	W1026 15:17:59.208109  170754 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1026 15:17:59.208180  170754 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1026 15:18:02.046399  170754 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.838193207s)
	I1026 15:18:02.046485  170754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:18:02.067669  170754 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 15:18:02.085002  170754 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 15:18:02.101237  170754 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 15:18:02.101276  170754 kubeadm.go:157] found existing configuration files:
	
	I1026 15:18:02.101339  170754 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 15:18:02.114384  170754 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 15:18:02.114485  170754 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 15:18:02.126396  170754 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 15:18:02.142701  170754 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 15:18:02.142792  170754 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 15:18:02.155147  170754 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 15:18:02.169189  170754 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 15:18:02.169273  170754 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 15:18:02.186883  170754 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 15:18:02.201291  170754 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 15:18:02.201381  170754 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 15:18:02.220302  170754 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1026 15:18:02.379018  170754 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 15:18:03.014836  177820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:03.515501  177820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:04.015755  177820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:04.515602  177820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:05.015474  177820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:05.102329  177820 kubeadm.go:1113] duration metric: took 4.24274723s to wait for elevateKubeSystemPrivileges
	I1026 15:18:05.102388  177820 kubeadm.go:402] duration metric: took 16.81057756s to StartCluster
	I1026 15:18:05.102418  177820 settings.go:142] acquiring lock: {Name:mk260d179873b5d5f15b4780b692965367036bbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:18:05.102533  177820 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-137233/kubeconfig
	I1026 15:18:05.103801  177820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-137233/kubeconfig: {Name:mka07626640e842c6c2177ad5f101c4a2dd91d4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:18:05.104090  177820 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 15:18:05.104111  177820 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.112 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:18:05.104222  177820 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 15:18:05.104314  177820 addons.go:69] Setting storage-provisioner=true in profile "no-preload-758002"
	I1026 15:18:05.104341  177820 addons.go:238] Setting addon storage-provisioner=true in "no-preload-758002"
	I1026 15:18:05.104345  177820 addons.go:69] Setting default-storageclass=true in profile "no-preload-758002"
	I1026 15:18:05.104363  177820 config.go:182] Loaded profile config "no-preload-758002": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:18:05.104372  177820 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-758002"
	I1026 15:18:05.104377  177820 host.go:66] Checking if "no-preload-758002" exists ...
	I1026 15:18:05.106028  177820 out.go:179] * Verifying Kubernetes components...
	I1026 15:18:05.107294  177820 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1026 15:18:01.199409  176942 pod_ready.go:104] pod "coredns-5dd5756b68-46566" is not "Ready", error: <nil>
	W1026 15:18:03.199971  176942 pod_ready.go:104] pod "coredns-5dd5756b68-46566" is not "Ready", error: <nil>
	W1026 15:18:05.201417  176942 pod_ready.go:104] pod "coredns-5dd5756b68-46566" is not "Ready", error: <nil>
	I1026 15:18:02.342116  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:02.343072  178853 main.go:141] libmachine: no network interface addresses found for domain embed-certs-163393 (source=lease)
	I1026 15:18:02.343096  178853 main.go:141] libmachine: trying to list again with source=arp
	I1026 15:18:02.343607  178853 main.go:141] libmachine: unable to find current IP address of domain embed-certs-163393 in network mk-embed-certs-163393 (interfaces detected: [])
	I1026 15:18:02.343666  178853 retry.go:31] will retry after 2.620685962s: waiting for domain to come up
	I1026 15:18:04.966947  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:04.967706  178853 main.go:141] libmachine: no network interface addresses found for domain embed-certs-163393 (source=lease)
	I1026 15:18:04.967730  178853 main.go:141] libmachine: trying to list again with source=arp
	I1026 15:18:04.968173  178853 main.go:141] libmachine: unable to find current IP address of domain embed-certs-163393 in network mk-embed-certs-163393 (interfaces detected: [])
	I1026 15:18:04.968232  178853 retry.go:31] will retry after 2.927688766s: waiting for domain to come up
	I1026 15:18:05.107313  177820 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:18:05.108568  177820 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:18:05.108589  177820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 15:18:05.109032  177820 addons.go:238] Setting addon default-storageclass=true in "no-preload-758002"
	I1026 15:18:05.109083  177820 host.go:66] Checking if "no-preload-758002" exists ...
	I1026 15:18:05.112160  177820 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 15:18:05.112183  177820 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 15:18:05.114418  177820 main.go:141] libmachine: domain no-preload-758002 has defined MAC address 52:54:00:4b:29:ca in network mk-no-preload-758002
	I1026 15:18:05.115313  177820 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:29:ca", ip: ""} in network mk-no-preload-758002: {Iface:virbr2 ExpiryTime:2025-10-26 16:17:22 +0000 UTC Type:0 Mac:52:54:00:4b:29:ca Iaid: IPaddr:192.168.50.112 Prefix:24 Hostname:no-preload-758002 Clientid:01:52:54:00:4b:29:ca}
	I1026 15:18:05.115356  177820 main.go:141] libmachine: domain no-preload-758002 has defined IP address 192.168.50.112 and MAC address 52:54:00:4b:29:ca in network mk-no-preload-758002
	I1026 15:18:05.115835  177820 sshutil.go:53] new ssh client: &{IP:192.168.50.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/no-preload-758002/id_rsa Username:docker}
	I1026 15:18:05.117320  177820 main.go:141] libmachine: domain no-preload-758002 has defined MAC address 52:54:00:4b:29:ca in network mk-no-preload-758002
	I1026 15:18:05.117928  177820 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:29:ca", ip: ""} in network mk-no-preload-758002: {Iface:virbr2 ExpiryTime:2025-10-26 16:17:22 +0000 UTC Type:0 Mac:52:54:00:4b:29:ca Iaid: IPaddr:192.168.50.112 Prefix:24 Hostname:no-preload-758002 Clientid:01:52:54:00:4b:29:ca}
	I1026 15:18:05.117969  177820 main.go:141] libmachine: domain no-preload-758002 has defined IP address 192.168.50.112 and MAC address 52:54:00:4b:29:ca in network mk-no-preload-758002
	I1026 15:18:05.118214  177820 sshutil.go:53] new ssh client: &{IP:192.168.50.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/no-preload-758002/id_rsa Username:docker}
	I1026 15:18:05.431950  177820 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1026 15:18:05.553934  177820 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:18:05.830488  177820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 15:18:05.854226  177820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:18:06.070410  177820 start.go:976] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1026 15:18:06.071921  177820 node_ready.go:35] waiting up to 6m0s for node "no-preload-758002" to be "Ready" ...
	I1026 15:18:06.096721  177820 node_ready.go:49] node "no-preload-758002" is "Ready"
	I1026 15:18:06.096760  177820 node_ready.go:38] duration metric: took 24.803858ms for node "no-preload-758002" to be "Ready" ...
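
The node_ready wait above finished in about 25ms because the node was already Ready. A hedged client-go sketch of that check follows; the node name and the 6m budget come from the log, while the poll interval is an assumption.

    // node_ready.go: poll until the node reports the Ready condition, mirroring
    // the node_ready wait in the log. Sketch only; the poll interval is assumed.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func nodeReady(n *corev1.Node) bool {
    	for _, c := range n.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	deadline := time.Now().Add(6 * time.Minute) // the log waits "up to 6m0s"
    	for time.Now().Before(deadline) {
    		n, err := cs.CoreV1().Nodes().Get(context.Background(), "no-preload-758002", metav1.GetOptions{})
    		if err == nil && nodeReady(n) {
    			fmt.Println(`node "no-preload-758002" is Ready`)
    			return
    		}
    		time.Sleep(2 * time.Second) // assumed poll interval
    	}
    	fmt.Println("timed out waiting for node to become Ready")
    }
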
	I1026 15:18:06.096778  177820 api_server.go:52] waiting for apiserver process to appear ...
	I1026 15:18:06.096857  177820 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:18:06.382308  177820 api_server.go:72] duration metric: took 1.278142223s to wait for apiserver process to appear ...
	I1026 15:18:06.382343  177820 api_server.go:88] waiting for apiserver healthz status ...
	I1026 15:18:06.382367  177820 api_server.go:253] Checking apiserver healthz at https://192.168.50.112:8443/healthz ...
	I1026 15:18:06.383727  177820 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1026 15:18:06.384867  177820 addons.go:514] duration metric: took 1.280668718s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1026 15:18:06.391351  177820 api_server.go:279] https://192.168.50.112:8443/healthz returned 200:
	ok
	I1026 15:18:06.392537  177820 api_server.go:141] control plane version: v1.34.1
	I1026 15:18:06.392562  177820 api_server.go:131] duration metric: took 10.211098ms to wait for apiserver health ...
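
The healthz wait above is a plain HTTPS GET against https://192.168.50.112:8443/healthz that succeeds once the endpoint returns 200 with body "ok". A small sketch of that probe; trusting the cluster CA at /var/lib/minikube/certs/ca.crt is an assumption about where the CA sits on the guest.

    // healthz.go: probe the apiserver /healthz endpoint. Sketch only; the CA
    // path is an assumption about the guest layout.
    package main

    import (
    	"crypto/tls"
    	"crypto/x509"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    	"time"
    )

    func main() {
    	pool := x509.NewCertPool()
    	if ca, err := os.ReadFile("/var/lib/minikube/certs/ca.crt"); err == nil {
    		pool.AppendCertsFromPEM(ca)
    	}
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{RootCAs: pool},
    		},
    	}
    	resp, err := client.Get("https://192.168.50.112:8443/healthz")
    	if err != nil {
    		fmt.Println("healthz not reachable:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    }
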
	I1026 15:18:06.392572  177820 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 15:18:06.398409  177820 system_pods.go:59] 8 kube-system pods found
	I1026 15:18:06.398440  177820 system_pods.go:61] "coredns-66bc5c9577-nmrz8" [647c86a6-d58e-42e6-9833-493a71e3fb88] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:18:06.398448  177820 system_pods.go:61] "coredns-66bc5c9577-sqsf7" [429d6d75-2369-4188-956a-142f6d765274] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:18:06.398477  177820 system_pods.go:61] "etcd-no-preload-758002" [1b6ec061-86d0-4411-a511-0e276db433d3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:18:06.398486  177820 system_pods.go:61] "kube-apiserver-no-preload-758002" [c1f22439-f72c-4c73-808e-d4502022e8ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:18:06.398494  177820 system_pods.go:61] "kube-controller-manager-no-preload-758002" [5306be76-be30-41ec-a953-9f918dd7d637] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:18:06.398506  177820 system_pods.go:61] "kube-proxy-zdr6t" [b7e3edbf-a798-4f1c-9aef-307604d6c671] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1026 15:18:06.398514  177820 system_pods.go:61] "kube-scheduler-no-preload-758002" [72acbd0d-ce7a-4519-a801-d194dcd80b61] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:18:06.398523  177820 system_pods.go:61] "storage-provisioner" [ba9fc411-41a8-4ba5-b162-d63806dd7a16] Pending
	I1026 15:18:06.398533  177820 system_pods.go:74] duration metric: took 5.953806ms to wait for pod list to return data ...
	I1026 15:18:06.398544  177820 default_sa.go:34] waiting for default service account to be created ...
	I1026 15:18:06.402139  177820 default_sa.go:45] found service account: "default"
	I1026 15:18:06.402161  177820 default_sa.go:55] duration metric: took 3.61016ms for default service account to be created ...
	I1026 15:18:06.402186  177820 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 15:18:06.405249  177820 system_pods.go:86] 8 kube-system pods found
	I1026 15:18:06.405274  177820 system_pods.go:89] "coredns-66bc5c9577-nmrz8" [647c86a6-d58e-42e6-9833-493a71e3fb88] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:18:06.405281  177820 system_pods.go:89] "coredns-66bc5c9577-sqsf7" [429d6d75-2369-4188-956a-142f6d765274] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:18:06.405288  177820 system_pods.go:89] "etcd-no-preload-758002" [1b6ec061-86d0-4411-a511-0e276db433d3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:18:06.405295  177820 system_pods.go:89] "kube-apiserver-no-preload-758002" [c1f22439-f72c-4c73-808e-d4502022e8ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:18:06.405300  177820 system_pods.go:89] "kube-controller-manager-no-preload-758002" [5306be76-be30-41ec-a953-9f918dd7d637] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:18:06.405307  177820 system_pods.go:89] "kube-proxy-zdr6t" [b7e3edbf-a798-4f1c-9aef-307604d6c671] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1026 15:18:06.405314  177820 system_pods.go:89] "kube-scheduler-no-preload-758002" [72acbd0d-ce7a-4519-a801-d194dcd80b61] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:18:06.405323  177820 system_pods.go:89] "storage-provisioner" [ba9fc411-41a8-4ba5-b162-d63806dd7a16] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:18:06.405338  177820 retry.go:31] will retry after 188.566274ms: missing components: kube-dns, kube-proxy
	I1026 15:18:06.575702  177820 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-758002" context rescaled to 1 replicas
	I1026 15:18:06.598820  177820 system_pods.go:86] 8 kube-system pods found
	I1026 15:18:06.598869  177820 system_pods.go:89] "coredns-66bc5c9577-nmrz8" [647c86a6-d58e-42e6-9833-493a71e3fb88] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:18:06.598882  177820 system_pods.go:89] "coredns-66bc5c9577-sqsf7" [429d6d75-2369-4188-956a-142f6d765274] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:18:06.598896  177820 system_pods.go:89] "etcd-no-preload-758002" [1b6ec061-86d0-4411-a511-0e276db433d3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:18:06.598909  177820 system_pods.go:89] "kube-apiserver-no-preload-758002" [c1f22439-f72c-4c73-808e-d4502022e8ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:18:06.598918  177820 system_pods.go:89] "kube-controller-manager-no-preload-758002" [5306be76-be30-41ec-a953-9f918dd7d637] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:18:06.598926  177820 system_pods.go:89] "kube-proxy-zdr6t" [b7e3edbf-a798-4f1c-9aef-307604d6c671] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1026 15:18:06.598935  177820 system_pods.go:89] "kube-scheduler-no-preload-758002" [72acbd0d-ce7a-4519-a801-d194dcd80b61] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:18:06.598944  177820 system_pods.go:89] "storage-provisioner" [ba9fc411-41a8-4ba5-b162-d63806dd7a16] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:18:06.598975  177820 retry.go:31] will retry after 254.88108ms: missing components: kube-dns, kube-proxy
	I1026 15:18:06.858628  177820 system_pods.go:86] 8 kube-system pods found
	I1026 15:18:06.858675  177820 system_pods.go:89] "coredns-66bc5c9577-nmrz8" [647c86a6-d58e-42e6-9833-493a71e3fb88] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:18:06.858686  177820 system_pods.go:89] "coredns-66bc5c9577-sqsf7" [429d6d75-2369-4188-956a-142f6d765274] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:18:06.858696  177820 system_pods.go:89] "etcd-no-preload-758002" [1b6ec061-86d0-4411-a511-0e276db433d3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:18:06.858705  177820 system_pods.go:89] "kube-apiserver-no-preload-758002" [c1f22439-f72c-4c73-808e-d4502022e8ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:18:06.858714  177820 system_pods.go:89] "kube-controller-manager-no-preload-758002" [5306be76-be30-41ec-a953-9f918dd7d637] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:18:06.858732  177820 system_pods.go:89] "kube-proxy-zdr6t" [b7e3edbf-a798-4f1c-9aef-307604d6c671] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1026 15:18:06.858744  177820 system_pods.go:89] "kube-scheduler-no-preload-758002" [72acbd0d-ce7a-4519-a801-d194dcd80b61] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:18:06.858755  177820 system_pods.go:89] "storage-provisioner" [ba9fc411-41a8-4ba5-b162-d63806dd7a16] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:18:06.858778  177820 retry.go:31] will retry after 476.19811ms: missing components: kube-dns, kube-proxy
	I1026 15:18:07.339415  177820 system_pods.go:86] 7 kube-system pods found
	I1026 15:18:07.339474  177820 system_pods.go:89] "coredns-66bc5c9577-sqsf7" [429d6d75-2369-4188-956a-142f6d765274] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:18:07.339491  177820 system_pods.go:89] "etcd-no-preload-758002" [1b6ec061-86d0-4411-a511-0e276db433d3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:18:07.339503  177820 system_pods.go:89] "kube-apiserver-no-preload-758002" [c1f22439-f72c-4c73-808e-d4502022e8ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:18:07.339511  177820 system_pods.go:89] "kube-controller-manager-no-preload-758002" [5306be76-be30-41ec-a953-9f918dd7d637] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:18:07.339520  177820 system_pods.go:89] "kube-proxy-zdr6t" [b7e3edbf-a798-4f1c-9aef-307604d6c671] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1026 15:18:07.339528  177820 system_pods.go:89] "kube-scheduler-no-preload-758002" [72acbd0d-ce7a-4519-a801-d194dcd80b61] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:18:07.339534  177820 system_pods.go:89] "storage-provisioner" [ba9fc411-41a8-4ba5-b162-d63806dd7a16] Running
	I1026 15:18:07.339554  177820 retry.go:31] will retry after 432.052198ms: missing components: kube-dns, kube-proxy
	I1026 15:18:07.777911  177820 system_pods.go:86] 7 kube-system pods found
	I1026 15:18:07.777977  177820 system_pods.go:89] "coredns-66bc5c9577-sqsf7" [429d6d75-2369-4188-956a-142f6d765274] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:18:07.777988  177820 system_pods.go:89] "etcd-no-preload-758002" [1b6ec061-86d0-4411-a511-0e276db433d3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:18:07.778002  177820 system_pods.go:89] "kube-apiserver-no-preload-758002" [c1f22439-f72c-4c73-808e-d4502022e8ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:18:07.778015  177820 system_pods.go:89] "kube-controller-manager-no-preload-758002" [5306be76-be30-41ec-a953-9f918dd7d637] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:18:07.778024  177820 system_pods.go:89] "kube-proxy-zdr6t" [b7e3edbf-a798-4f1c-9aef-307604d6c671] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1026 15:18:07.778043  177820 system_pods.go:89] "kube-scheduler-no-preload-758002" [72acbd0d-ce7a-4519-a801-d194dcd80b61] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:18:07.778049  177820 system_pods.go:89] "storage-provisioner" [ba9fc411-41a8-4ba5-b162-d63806dd7a16] Running
	I1026 15:18:07.778069  177820 retry.go:31] will retry after 696.721573ms: missing components: kube-dns, kube-proxy
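
The repeated "will retry after ..." blocks above are a jittered polling loop over the kube-system pod list that keeps going until kube-dns and kube-proxy are running. A minimal sketch of that pattern with client-go; the label keys and the jitter bounds are assumptions chosen to roughly match the 180ms to 700ms retries shown in the log.

    // pods_running.go: poll kube-system pods until required apps are Running.
    // Sketch only; label keys and jitter bounds are assumptions.
    package main

    import (
    	"context"
    	"fmt"
    	"math/rand"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // missing returns the apps in the required list that have no Running pod.
    func missing(pods []corev1.Pod, apps ...string) []string {
    	running := map[string]bool{}
    	for _, p := range pods {
    		if p.Status.Phase == corev1.PodRunning {
    			running[p.Labels["k8s-app"]] = true
    		}
    	}
    	var out []string
    	for _, a := range apps {
    		if !running[a] {
    			out = append(out, a)
    		}
    	}
    	return out
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	for {
    		pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
    		if err == nil {
    			m := missing(pods.Items, "kube-dns", "kube-proxy")
    			if len(m) == 0 {
    				fmt.Println("all required kube-system apps are running")
    				return
    			}
    			fmt.Println("missing components:", m)
    		}
    		// jittered backoff, loosely mirroring the retries in the log
    		time.Sleep(time.Duration(150+rand.Intn(550)) * time.Millisecond)
    	}
    }
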
	W1026 15:18:07.201562  176942 pod_ready.go:104] pod "coredns-5dd5756b68-46566" is not "Ready", error: <nil>
	W1026 15:18:09.698745  176942 pod_ready.go:104] pod "coredns-5dd5756b68-46566" is not "Ready", error: <nil>
	I1026 15:18:07.897885  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:07.899209  178853 main.go:141] libmachine: domain embed-certs-163393 has current primary IP address 192.168.39.103 and MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:07.899248  178853 main.go:141] libmachine: found domain IP: 192.168.39.103
	I1026 15:18:07.899260  178853 main.go:141] libmachine: reserving static IP address...
	I1026 15:18:07.899781  178853 main.go:141] libmachine: unable to find host DHCP lease matching {name: "embed-certs-163393", mac: "52:54:00:bb:5d:75", ip: "192.168.39.103"} in network mk-embed-certs-163393
	I1026 15:18:08.108414  178853 main.go:141] libmachine: reserved static IP address 192.168.39.103 for domain embed-certs-163393
	I1026 15:18:08.108438  178853 main.go:141] libmachine: waiting for SSH...
	I1026 15:18:08.108444  178853 main.go:141] libmachine: Getting to WaitForSSH function...
	I1026 15:18:08.112076  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:08.112657  178853 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bb:5d:75", ip: ""} in network mk-embed-certs-163393: {Iface:virbr1 ExpiryTime:2025-10-26 16:18:07 +0000 UTC Type:0 Mac:52:54:00:bb:5d:75 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:minikube Clientid:01:52:54:00:bb:5d:75}
	I1026 15:18:08.112701  178853 main.go:141] libmachine: domain embed-certs-163393 has defined IP address 192.168.39.103 and MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:08.112914  178853 main.go:141] libmachine: Using SSH client type: native
	I1026 15:18:08.113305  178853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I1026 15:18:08.113321  178853 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1026 15:18:08.234042  178853 main.go:141] libmachine: SSH cmd err, output: <nil>: 
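
The "waiting for SSH" exchange above is the usual reachability probe: keep dialing port 22 with the machine key and run `exit 0` until it succeeds. A sketch with golang.org/x/crypto/ssh, using the key path and docker user shown in the log; the retry cadence and the skipped host-key check are simplifications.

    // wait_for_ssh.go: dial the guest and run "exit 0" until SSH is usable.
    // Sketch only; host-key verification is skipped for brevity.
    package main

    import (
    	"fmt"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    func trySSH(addr string, cfg *ssh.ClientConfig) error {
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return err
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	defer sess.Close()
    	return sess.Run("exit 0")
    }

    func main() {
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/21664-137233/.minikube/machines/embed-certs-163393/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
    		Timeout:         10 * time.Second,
    	}
    	for i := 0; i < 60; i++ {
    		if err := trySSH("192.168.39.103:22", cfg); err == nil {
    			fmt.Println("SSH is up")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("gave up waiting for SSH")
    }
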
	I1026 15:18:08.234519  178853 main.go:141] libmachine: domain creation complete
	I1026 15:18:08.236407  178853 machine.go:93] provisionDockerMachine start ...
	I1026 15:18:08.239152  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:08.239638  178853 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bb:5d:75", ip: ""} in network mk-embed-certs-163393: {Iface:virbr1 ExpiryTime:2025-10-26 16:18:07 +0000 UTC Type:0 Mac:52:54:00:bb:5d:75 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:embed-certs-163393 Clientid:01:52:54:00:bb:5d:75}
	I1026 15:18:08.239671  178853 main.go:141] libmachine: domain embed-certs-163393 has defined IP address 192.168.39.103 and MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:08.239924  178853 main.go:141] libmachine: Using SSH client type: native
	I1026 15:18:08.240165  178853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I1026 15:18:08.240179  178853 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 15:18:08.359836  178853 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1026 15:18:08.359873  178853 buildroot.go:166] provisioning hostname "embed-certs-163393"
	I1026 15:18:08.363827  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:08.364479  178853 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bb:5d:75", ip: ""} in network mk-embed-certs-163393: {Iface:virbr1 ExpiryTime:2025-10-26 16:18:07 +0000 UTC Type:0 Mac:52:54:00:bb:5d:75 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:embed-certs-163393 Clientid:01:52:54:00:bb:5d:75}
	I1026 15:18:08.364523  178853 main.go:141] libmachine: domain embed-certs-163393 has defined IP address 192.168.39.103 and MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:08.364782  178853 main.go:141] libmachine: Using SSH client type: native
	I1026 15:18:08.365068  178853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I1026 15:18:08.365094  178853 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-163393 && echo "embed-certs-163393" | sudo tee /etc/hostname
	I1026 15:18:08.508692  178853 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-163393
	
	I1026 15:18:08.512839  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:08.513430  178853 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bb:5d:75", ip: ""} in network mk-embed-certs-163393: {Iface:virbr1 ExpiryTime:2025-10-26 16:18:07 +0000 UTC Type:0 Mac:52:54:00:bb:5d:75 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:embed-certs-163393 Clientid:01:52:54:00:bb:5d:75}
	I1026 15:18:08.513471  178853 main.go:141] libmachine: domain embed-certs-163393 has defined IP address 192.168.39.103 and MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:08.513717  178853 main.go:141] libmachine: Using SSH client type: native
	I1026 15:18:08.514015  178853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I1026 15:18:08.514043  178853 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-163393' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-163393/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-163393' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 15:18:08.642566  178853 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 15:18:08.642599  178853 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21664-137233/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-137233/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-137233/.minikube}
	I1026 15:18:08.642632  178853 buildroot.go:174] setting up certificates
	I1026 15:18:08.642650  178853 provision.go:84] configureAuth start
	I1026 15:18:08.645882  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:08.646333  178853 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bb:5d:75", ip: ""} in network mk-embed-certs-163393: {Iface:virbr1 ExpiryTime:2025-10-26 16:18:07 +0000 UTC Type:0 Mac:52:54:00:bb:5d:75 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:embed-certs-163393 Clientid:01:52:54:00:bb:5d:75}
	I1026 15:18:08.646360  178853 main.go:141] libmachine: domain embed-certs-163393 has defined IP address 192.168.39.103 and MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:08.648965  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:08.649438  178853 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bb:5d:75", ip: ""} in network mk-embed-certs-163393: {Iface:virbr1 ExpiryTime:2025-10-26 16:18:07 +0000 UTC Type:0 Mac:52:54:00:bb:5d:75 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:embed-certs-163393 Clientid:01:52:54:00:bb:5d:75}
	I1026 15:18:08.649473  178853 main.go:141] libmachine: domain embed-certs-163393 has defined IP address 192.168.39.103 and MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:08.649631  178853 provision.go:143] copyHostCerts
	I1026 15:18:08.649702  178853 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-137233/.minikube/ca.pem, removing ...
	I1026 15:18:08.649723  178853 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-137233/.minikube/ca.pem
	I1026 15:18:08.649833  178853 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-137233/.minikube/ca.pem (1082 bytes)
	I1026 15:18:08.649954  178853 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-137233/.minikube/cert.pem, removing ...
	I1026 15:18:08.649963  178853 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-137233/.minikube/cert.pem
	I1026 15:18:08.649995  178853 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-137233/.minikube/cert.pem (1123 bytes)
	I1026 15:18:08.650076  178853 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-137233/.minikube/key.pem, removing ...
	I1026 15:18:08.650084  178853 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-137233/.minikube/key.pem
	I1026 15:18:08.650108  178853 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-137233/.minikube/key.pem (1675 bytes)
	I1026 15:18:08.650170  178853 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-137233/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca-key.pem org=jenkins.embed-certs-163393 san=[127.0.0.1 192.168.39.103 embed-certs-163393 localhost minikube]
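
The provision.go line above issues a server certificate signed by the minikube CA with the listed SANs (127.0.0.1, 192.168.39.103, embed-certs-163393, localhost, minikube). A hedged crypto/x509 sketch of issuing such a cert from an existing CA key pair follows; it assumes an RSA CA key in PKCS#1 PEM and reads ca.pem/ca-key.pem from the working directory, and the validity period and key size are assumptions, not minikube's exact settings.

    // server_cert.go: issue a server certificate with SANs from an existing CA.
    // Sketch only; validity period, key size and key format are assumptions.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func mustPEMBlock(path string) []byte {
    	raw, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(raw)
    	if block == nil {
    		panic("no PEM data in " + path)
    	}
    	return block.Bytes
    }

    func writePEM(path, typ string, der []byte) {
    	f, err := os.Create(path)
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()
    	if err := pem.Encode(f, &pem.Block{Type: typ, Bytes: der}); err != nil {
    		panic(err)
    	}
    }

    func main() {
    	caCert, err := x509.ParseCertificate(mustPEMBlock("ca.pem"))
    	if err != nil {
    		panic(err)
    	}
    	caKey, err := x509.ParsePKCS1PrivateKey(mustPEMBlock("ca-key.pem"))
    	if err != nil {
    		panic(err)
    	}
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-163393"}},
    		NotBefore:    time.Now().Add(-time.Hour),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"embed-certs-163393", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.103")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	if err != nil {
    		panic(err)
    	}
    	writePEM("server.pem", "CERTIFICATE", der)
    	writePEM("server-key.pem", "RSA PRIVATE KEY", x509.MarshalPKCS1PrivateKey(key))
    }
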
	I1026 15:18:09.206370  178853 provision.go:177] copyRemoteCerts
	I1026 15:18:09.206435  178853 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 15:18:09.209544  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:09.210016  178853 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bb:5d:75", ip: ""} in network mk-embed-certs-163393: {Iface:virbr1 ExpiryTime:2025-10-26 16:18:07 +0000 UTC Type:0 Mac:52:54:00:bb:5d:75 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:embed-certs-163393 Clientid:01:52:54:00:bb:5d:75}
	I1026 15:18:09.210042  178853 main.go:141] libmachine: domain embed-certs-163393 has defined IP address 192.168.39.103 and MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:09.210207  178853 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/embed-certs-163393/id_rsa Username:docker}
	I1026 15:18:09.301100  178853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 15:18:09.336044  178853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1026 15:18:09.368637  178853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1026 15:18:09.402330  178853 provision.go:87] duration metric: took 759.662052ms to configureAuth
	I1026 15:18:09.402359  178853 buildroot.go:189] setting minikube options for container-runtime
	I1026 15:18:09.402622  178853 config.go:182] Loaded profile config "embed-certs-163393": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:18:09.405912  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:09.406391  178853 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bb:5d:75", ip: ""} in network mk-embed-certs-163393: {Iface:virbr1 ExpiryTime:2025-10-26 16:18:07 +0000 UTC Type:0 Mac:52:54:00:bb:5d:75 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:embed-certs-163393 Clientid:01:52:54:00:bb:5d:75}
	I1026 15:18:09.406424  178853 main.go:141] libmachine: domain embed-certs-163393 has defined IP address 192.168.39.103 and MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:09.406664  178853 main.go:141] libmachine: Using SSH client type: native
	I1026 15:18:09.406876  178853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I1026 15:18:09.406893  178853 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 15:18:09.668208  178853 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 15:18:09.668239  178853 machine.go:96] duration metric: took 1.431810212s to provisionDockerMachine
	I1026 15:18:09.668253  178853 client.go:171] duration metric: took 18.458362485s to LocalClient.Create
	I1026 15:18:09.668275  178853 start.go:167] duration metric: took 18.458425077s to libmachine.API.Create "embed-certs-163393"
	I1026 15:18:09.668284  178853 start.go:293] postStartSetup for "embed-certs-163393" (driver="kvm2")
	I1026 15:18:09.668297  178853 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 15:18:09.668373  178853 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 15:18:09.671598  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:09.672035  178853 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bb:5d:75", ip: ""} in network mk-embed-certs-163393: {Iface:virbr1 ExpiryTime:2025-10-26 16:18:07 +0000 UTC Type:0 Mac:52:54:00:bb:5d:75 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:embed-certs-163393 Clientid:01:52:54:00:bb:5d:75}
	I1026 15:18:09.672063  178853 main.go:141] libmachine: domain embed-certs-163393 has defined IP address 192.168.39.103 and MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:09.672202  178853 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/embed-certs-163393/id_rsa Username:docker}
	I1026 15:18:09.763653  178853 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 15:18:09.769383  178853 info.go:137] Remote host: Buildroot 2025.02
	I1026 15:18:09.769425  178853 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-137233/.minikube/addons for local assets ...
	I1026 15:18:09.769505  178853 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-137233/.minikube/files for local assets ...
	I1026 15:18:09.769600  178853 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-137233/.minikube/files/etc/ssl/certs/1412332.pem -> 1412332.pem in /etc/ssl/certs
	I1026 15:18:09.769741  178853 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 15:18:09.787891  178853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/files/etc/ssl/certs/1412332.pem --> /etc/ssl/certs/1412332.pem (1708 bytes)
	I1026 15:18:09.823649  178853 start.go:296] duration metric: took 155.345554ms for postStartSetup
	I1026 15:18:09.826848  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:09.827204  178853 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bb:5d:75", ip: ""} in network mk-embed-certs-163393: {Iface:virbr1 ExpiryTime:2025-10-26 16:18:07 +0000 UTC Type:0 Mac:52:54:00:bb:5d:75 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:embed-certs-163393 Clientid:01:52:54:00:bb:5d:75}
	I1026 15:18:09.827233  178853 main.go:141] libmachine: domain embed-certs-163393 has defined IP address 192.168.39.103 and MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:09.827442  178853 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/config.json ...
	I1026 15:18:09.827629  178853 start.go:128] duration metric: took 18.619193814s to createHost
	I1026 15:18:09.829867  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:09.830224  178853 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bb:5d:75", ip: ""} in network mk-embed-certs-163393: {Iface:virbr1 ExpiryTime:2025-10-26 16:18:07 +0000 UTC Type:0 Mac:52:54:00:bb:5d:75 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:embed-certs-163393 Clientid:01:52:54:00:bb:5d:75}
	I1026 15:18:09.830244  178853 main.go:141] libmachine: domain embed-certs-163393 has defined IP address 192.168.39.103 and MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:09.830379  178853 main.go:141] libmachine: Using SSH client type: native
	I1026 15:18:09.830611  178853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I1026 15:18:09.830621  178853 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1026 15:18:09.943444  178853 main.go:141] libmachine: SSH cmd err, output: <nil>: 1761491889.896325088
	
	I1026 15:18:09.943496  178853 fix.go:216] guest clock: 1761491889.896325088
	I1026 15:18:09.943504  178853 fix.go:229] Guest: 2025-10-26 15:18:09.896325088 +0000 UTC Remote: 2025-10-26 15:18:09.827641672 +0000 UTC m=+18.731334028 (delta=68.683416ms)
	I1026 15:18:09.943521  178853 fix.go:200] guest clock delta is within tolerance: 68.683416ms
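
The fix.go lines above compare the guest clock (from `date +%s.%N`) with the host clock and accept the ~68ms delta as within tolerance. A small sketch of parsing that output and checking the delta, using the two samples from the log; the one-second tolerance here is an assumption.

    // clock_delta.go: parse `date +%s.%N` output from the guest and compare it
    // with a host timestamp, as the fix.go lines above do. Sketch only; the
    // one-second tolerance is an assumption.
    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    func parseGuestClock(out string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		// normalize the fractional part to exactly nine digits (nanoseconds)
    		frac := (parts[1] + "000000000")[:9]
    		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec).UTC(), nil
    }

    func main() {
    	guest, err := parseGuestClock("1761491889.896325088") // guest sample from the log
    	if err != nil {
    		panic(err)
    	}
    	host := time.Date(2025, 10, 26, 15, 18, 9, 827641672, time.UTC) // host sample from the log
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	if delta <= time.Second {
    		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
    	} else {
    		fmt.Printf("guest clock skewed by %v; a time sync would be needed\n", delta)
    	}
    }
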
	I1026 15:18:09.943526  178853 start.go:83] releasing machines lock for "embed-certs-163393", held for 18.73515759s
	I1026 15:18:09.946765  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:09.947208  178853 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bb:5d:75", ip: ""} in network mk-embed-certs-163393: {Iface:virbr1 ExpiryTime:2025-10-26 16:18:07 +0000 UTC Type:0 Mac:52:54:00:bb:5d:75 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:embed-certs-163393 Clientid:01:52:54:00:bb:5d:75}
	I1026 15:18:09.947242  178853 main.go:141] libmachine: domain embed-certs-163393 has defined IP address 192.168.39.103 and MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:09.947793  178853 ssh_runner.go:195] Run: cat /version.json
	I1026 15:18:09.947856  178853 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 15:18:09.950746  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:09.951062  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:09.951265  178853 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bb:5d:75", ip: ""} in network mk-embed-certs-163393: {Iface:virbr1 ExpiryTime:2025-10-26 16:18:07 +0000 UTC Type:0 Mac:52:54:00:bb:5d:75 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:embed-certs-163393 Clientid:01:52:54:00:bb:5d:75}
	I1026 15:18:09.951295  178853 main.go:141] libmachine: domain embed-certs-163393 has defined IP address 192.168.39.103 and MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:09.951485  178853 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/embed-certs-163393/id_rsa Username:docker}
	I1026 15:18:09.951628  178853 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bb:5d:75", ip: ""} in network mk-embed-certs-163393: {Iface:virbr1 ExpiryTime:2025-10-26 16:18:07 +0000 UTC Type:0 Mac:52:54:00:bb:5d:75 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:embed-certs-163393 Clientid:01:52:54:00:bb:5d:75}
	I1026 15:18:09.951663  178853 main.go:141] libmachine: domain embed-certs-163393 has defined IP address 192.168.39.103 and MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:09.951813  178853 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/embed-certs-163393/id_rsa Username:docker}
	I1026 15:18:10.059000  178853 ssh_runner.go:195] Run: systemctl --version
	I1026 15:18:10.067022  178853 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 15:18:10.219313  178853 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 15:18:10.225970  178853 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 15:18:10.226058  178853 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 15:18:10.245615  178853 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 15:18:10.245648  178853 start.go:495] detecting cgroup driver to use...
	I1026 15:18:10.245731  178853 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 15:18:10.263804  178853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 15:18:10.280269  178853 docker.go:218] disabling cri-docker service (if available) ...
	I1026 15:18:10.280344  178853 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 15:18:10.298076  178853 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 15:18:10.312935  178853 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 15:18:10.469319  178853 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 15:18:10.701535  178853 docker.go:234] disabling docker service ...
	I1026 15:18:10.701621  178853 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 15:18:10.720130  178853 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 15:18:10.735741  178853 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 15:18:10.912675  178853 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 15:18:11.060726  178853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 15:18:11.077148  178853 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 15:18:11.099487  178853 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 15:18:11.099560  178853 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:18:11.113343  178853 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 15:18:11.113413  178853 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:18:11.127886  178853 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:18:11.141518  178853 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
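
The sed invocations above pin the pause image and switch CRI-O to the cgroupfs cgroup manager by rewriting /etc/crio/crio.conf.d/02-crio.conf. A Go sketch performing the same two line rewrites locally; the regexes mirror the sed expressions in the log, and running it directly on the guest (rather than through ssh_runner) is a simplification.

    // crio_conf.go: rewrite pause_image and cgroup_manager in a CRI-O drop-in,
    // mirroring the sed commands in the log. Sketch only.
    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    func main() {
    	const path = "/etc/crio/crio.conf.d/02-crio.conf"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	conf := string(data)
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
    	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
    		panic(err)
    	}
    	fmt.Println("updated", path, "- restart crio to apply (systemctl restart crio)")
    }
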
	I1026 15:18:11.406388  170754 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1026 15:18:11.406516  170754 kubeadm.go:318] [preflight] Running pre-flight checks
	I1026 15:18:11.406634  170754 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 15:18:11.406771  170754 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 15:18:11.406929  170754 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 15:18:11.407054  170754 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 15:18:11.408925  170754 out.go:252]   - Generating certificates and keys ...
	I1026 15:18:11.409140  170754 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1026 15:18:11.409601  170754 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1026 15:18:11.409728  170754 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1026 15:18:11.409826  170754 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1026 15:18:11.409930  170754 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1026 15:18:11.410096  170754 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1026 15:18:11.410211  170754 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1026 15:18:11.410341  170754 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1026 15:18:11.410529  170754 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1026 15:18:11.410695  170754 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1026 15:18:11.410773  170754 kubeadm.go:318] [certs] Using the existing "sa" key
	I1026 15:18:11.410868  170754 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 15:18:11.410938  170754 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 15:18:11.411037  170754 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 15:18:11.411110  170754 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 15:18:11.411196  170754 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 15:18:11.411284  170754 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 15:18:11.411413  170754 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 15:18:11.411545  170754 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 15:18:11.412975  170754 out.go:252]   - Booting up control plane ...
	I1026 15:18:11.413106  170754 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 15:18:11.413220  170754 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 15:18:11.413325  170754 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 15:18:11.413490  170754 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 15:18:11.413623  170754 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1026 15:18:11.413769  170754 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1026 15:18:11.413888  170754 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 15:18:11.413946  170754 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1026 15:18:11.414133  170754 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 15:18:11.414294  170754 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 15:18:11.414382  170754 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001124481s
	I1026 15:18:11.414524  170754 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1026 15:18:11.414631  170754 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.72.175:8443/livez
	I1026 15:18:11.414797  170754 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1026 15:18:11.414930  170754 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1026 15:18:11.415032  170754 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.179109967s
	I1026 15:18:11.415134  170754 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.430934484s
	I1026 15:18:11.415224  170754 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 5.001879134s
	I1026 15:18:11.415398  170754 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 15:18:11.415622  170754 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 15:18:11.415726  170754 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 15:18:11.416015  170754 kubeadm.go:318] [mark-control-plane] Marking the node pause-750553 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 15:18:11.416093  170754 kubeadm.go:318] [bootstrap-token] Using token: 67vbze.ccs7edufrsqva8ht
	I1026 15:18:11.417280  170754 out.go:252]   - Configuring RBAC rules ...
	I1026 15:18:11.417404  170754 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 15:18:11.417540  170754 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 15:18:11.417710  170754 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 15:18:11.417902  170754 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 15:18:11.418025  170754 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 15:18:11.418102  170754 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 15:18:11.418230  170754 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 15:18:11.418283  170754 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1026 15:18:11.418352  170754 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1026 15:18:11.418365  170754 kubeadm.go:318] 
	I1026 15:18:11.418441  170754 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1026 15:18:11.418475  170754 kubeadm.go:318] 
	I1026 15:18:11.418600  170754 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1026 15:18:11.418612  170754 kubeadm.go:318] 
	I1026 15:18:11.418648  170754 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1026 15:18:11.418733  170754 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 15:18:11.418803  170754 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 15:18:11.418815  170754 kubeadm.go:318] 
	I1026 15:18:11.418891  170754 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1026 15:18:11.418908  170754 kubeadm.go:318] 
	I1026 15:18:11.418982  170754 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 15:18:11.418988  170754 kubeadm.go:318] 
	I1026 15:18:11.419075  170754 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1026 15:18:11.419194  170754 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 15:18:11.419303  170754 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 15:18:11.419316  170754 kubeadm.go:318] 
	I1026 15:18:11.419417  170754 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 15:18:11.419559  170754 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1026 15:18:11.419570  170754 kubeadm.go:318] 
	I1026 15:18:11.419679  170754 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 67vbze.ccs7edufrsqva8ht \
	I1026 15:18:11.419850  170754 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3ad055a424ab8eb6b83482448af651001c6d6c03abf832b7f498f66a21acb6be \
	I1026 15:18:11.419881  170754 kubeadm.go:318] 	--control-plane 
	I1026 15:18:11.419892  170754 kubeadm.go:318] 
	I1026 15:18:11.420003  170754 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1026 15:18:11.420019  170754 kubeadm.go:318] 
	I1026 15:18:11.420132  170754 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 67vbze.ccs7edufrsqva8ht \
	I1026 15:18:11.420283  170754 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3ad055a424ab8eb6b83482448af651001c6d6c03abf832b7f498f66a21acb6be 
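
The join commands above carry a --discovery-token-ca-cert-hash, which is the SHA-256 of the cluster CA certificate's Subject Public Key Info. A sketch of recomputing that hash; the CA path is the standard kubeadm location and is an assumption here, and the hash printed obviously depends on the CA on that host.

    // ca_hash.go: recompute the kubeadm discovery-token-ca-cert-hash (sha256 of
    // the CA certificate's Subject Public Key Info). The CA path is assumed.
    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	raw, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(raw)
    	if block == nil {
    		panic("ca.crt does not contain PEM data")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// The hash covers the DER-encoded SubjectPublicKeyInfo, not the whole cert.
    	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }
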
	I1026 15:18:11.420312  170754 cni.go:84] Creating CNI manager for ""
	I1026 15:18:11.420322  170754 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 15:18:11.421589  170754 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1026 15:18:11.155262  178853 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 15:18:11.167436  178853 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:18:11.179905  178853 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:18:11.200259  178853 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:18:11.212634  178853 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 15:18:11.223171  178853 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1026 15:18:11.223224  178853 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1026 15:18:11.245165  178853 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 15:18:11.257714  178853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:18:11.419386  178853 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 15:18:11.555859  178853 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 15:18:11.555959  178853 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 15:18:11.561593  178853 start.go:563] Will wait 60s for crictl version
	I1026 15:18:11.561676  178853 ssh_runner.go:195] Run: which crictl
	I1026 15:18:11.565500  178853 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1026 15:18:11.605904  178853 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1026 15:18:11.605997  178853 ssh_runner.go:195] Run: crio --version
	I1026 15:18:11.639831  178853 ssh_runner.go:195] Run: crio --version
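
Lines above wait for /var/run/crio/crio.sock to appear, locate crictl, and read the runtime version (cri-o 1.29.1). A sketch of the same readiness check; the 60s budget comes from the log, while the poll interval and passwordless sudo on the guest are assumptions.

    // crio_ready.go: wait for the CRI-O socket, then query crictl for the
    // runtime version. Sketch only; the poll interval is an assumption.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"time"
    )

    func main() {
    	const sock = "/var/run/crio/crio.sock"
    	deadline := time.Now().Add(60 * time.Second) // "Will wait 60s for socket path"
    	for {
    		if _, err := os.Stat(sock); err == nil {
    			break
    		}
    		if time.Now().After(deadline) {
    			panic("timed out waiting for " + sock)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	out, err := exec.Command("sudo", "crictl", "version").CombinedOutput()
    	if err != nil {
    		panic(fmt.Sprintf("crictl version failed: %v\n%s", err, out))
    	}
    	fmt.Print(string(out))
    }
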
	I1026 15:18:11.676581  178853 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1026 15:18:08.479112  177820 system_pods.go:86] 7 kube-system pods found
	I1026 15:18:08.479146  177820 system_pods.go:89] "coredns-66bc5c9577-sqsf7" [429d6d75-2369-4188-956a-142f6d765274] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:18:08.479158  177820 system_pods.go:89] "etcd-no-preload-758002" [1b6ec061-86d0-4411-a511-0e276db433d3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:18:08.479169  177820 system_pods.go:89] "kube-apiserver-no-preload-758002" [c1f22439-f72c-4c73-808e-d4502022e8ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:18:08.479177  177820 system_pods.go:89] "kube-controller-manager-no-preload-758002" [5306be76-be30-41ec-a953-9f918dd7d637] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:18:08.479183  177820 system_pods.go:89] "kube-proxy-zdr6t" [b7e3edbf-a798-4f1c-9aef-307604d6c671] Running
	I1026 15:18:08.479190  177820 system_pods.go:89] "kube-scheduler-no-preload-758002" [72acbd0d-ce7a-4519-a801-d194dcd80b61] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:18:08.479196  177820 system_pods.go:89] "storage-provisioner" [ba9fc411-41a8-4ba5-b162-d63806dd7a16] Running
	I1026 15:18:08.479217  177820 system_pods.go:126] duration metric: took 2.077014595s to wait for k8s-apps to be running ...
	I1026 15:18:08.479231  177820 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 15:18:08.479288  177820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:18:08.499611  177820 system_svc.go:56] duration metric: took 20.370764ms WaitForService to wait for kubelet
	I1026 15:18:08.499644  177820 kubeadm.go:586] duration metric: took 3.395489547s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:18:08.499662  177820 node_conditions.go:102] verifying NodePressure condition ...
	I1026 15:18:08.503525  177820 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 15:18:08.503563  177820 node_conditions.go:123] node cpu capacity is 2
	I1026 15:18:08.503581  177820 node_conditions.go:105] duration metric: took 3.913686ms to run NodePressure ...
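
The NodePressure verification above reads node capacity (2 CPUs, 17734596Ki ephemeral storage) from the node status. A short client-go sketch of fetching those figures for the same node; the kubeconfig path is an assumption.

    // node_capacity.go: print CPU and ephemeral-storage capacity for a node.
    // Sketch only; the kubeconfig path is an assumption.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	node, err := cs.CoreV1().Nodes().Get(context.Background(), "no-preload-758002", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	cpu := node.Status.Capacity[corev1.ResourceCPU]
    	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
    	fmt.Printf("cpu capacity: %s, ephemeral-storage capacity: %s\n", cpu.String(), storage.String())
    }
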
	I1026 15:18:08.503596  177820 start.go:241] waiting for startup goroutines ...
	I1026 15:18:08.503606  177820 start.go:246] waiting for cluster config update ...
	I1026 15:18:08.503626  177820 start.go:255] writing updated cluster config ...
	I1026 15:18:08.503959  177820 ssh_runner.go:195] Run: rm -f paused
	I1026 15:18:08.510701  177820 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:18:08.515655  177820 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-sqsf7" in "kube-system" namespace to be "Ready" or be gone ...
	W1026 15:18:10.524590  177820 pod_ready.go:104] pod "coredns-66bc5c9577-sqsf7" is not "Ready", error: <nil>
	I1026 15:18:11.422571  170754 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1026 15:18:11.443884  170754 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1026 15:18:11.489694  170754 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 15:18:11.489800  170754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:11.489839  170754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes pause-750553 minikube.k8s.io/updated_at=2025_10_26T15_18_11_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46 minikube.k8s.io/name=pause-750553 minikube.k8s.io/primary=true
	I1026 15:18:11.650540  170754 ops.go:34] apiserver oom_adj: -16
	I1026 15:18:11.650684  170754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:12.150825  170754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:12.651258  170754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:13.150904  170754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1026 15:18:11.698977  176942 pod_ready.go:104] pod "coredns-5dd5756b68-46566" is not "Ready", error: <nil>
	W1026 15:18:13.699502  176942 pod_ready.go:104] pod "coredns-5dd5756b68-46566" is not "Ready", error: <nil>
	I1026 15:18:11.680507  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:11.681031  178853 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bb:5d:75", ip: ""} in network mk-embed-certs-163393: {Iface:virbr1 ExpiryTime:2025-10-26 16:18:07 +0000 UTC Type:0 Mac:52:54:00:bb:5d:75 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:embed-certs-163393 Clientid:01:52:54:00:bb:5d:75}
	I1026 15:18:11.681072  178853 main.go:141] libmachine: domain embed-certs-163393 has defined IP address 192.168.39.103 and MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:11.681337  178853 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1026 15:18:11.686204  178853 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
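The one-liner above de-duplicates and re-adds the host.minikube.internal mapping in /etc/hosts. The same idea as a small Go sketch (illustrative; it edits whatever file path you pass in and skips the temp-file/sudo step):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // upsertHostsEntry drops any existing line ending in "\t<name>" and appends
    // "<ip>\t<name>", mirroring the grep -v / echo / cp pipeline above.
    func upsertHostsEntry(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+name) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+name)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
    	// Hypothetical target path; in the log the real target is /etc/hosts on the guest.
    	if err := upsertHostsEntry("/tmp/hosts.example", "192.168.39.1", "host.minikube.internal"); err != nil {
    		fmt.Println("error:", err)
    	}
    }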
	I1026 15:18:11.703014  178853 kubeadm.go:883] updating cluster {Name:embed-certs-163393 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-163393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.103 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 15:18:11.703130  178853 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:18:11.703175  178853 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:18:11.740765  178853 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1026 15:18:11.740837  178853 ssh_runner.go:195] Run: which lz4
	I1026 15:18:11.745258  178853 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1026 15:18:11.750220  178853 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1026 15:18:11.750270  178853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1026 15:18:13.211048  178853 crio.go:462] duration metric: took 1.465833967s to copy over tarball
	I1026 15:18:13.211132  178853 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1026 15:18:14.868710  178853 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.657537205s)
	I1026 15:18:14.868739  178853 crio.go:469] duration metric: took 1.657660446s to extract the tarball
	I1026 15:18:14.868746  178853 ssh_runner.go:146] rm: /preloaded.tar.lz4
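The preload path above is: check whether /preloaded.tar.lz4 already exists, copy the cached tarball over if not, unpack it under /var with lz4, then delete it. A rough local equivalent (a sketch; assumes tar and lz4 are installed and that we already have sufficient privileges, whereas the log does this over SSH with sudo):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // extractPreload unpacks the preload tarball under destDir with lz4
    // decompression and removes it afterwards, mirroring the tar/rm steps above.
    func extractPreload(tarball, destDir string) error {
    	if _, err := os.Stat(tarball); err != nil {
    		return fmt.Errorf("preload not present, would fall back to pulling images: %w", err)
    	}
    	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", destDir, "-xf", tarball)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	if err := cmd.Run(); err != nil {
    		return err
    	}
    	return os.Remove(tarball)
    }

    func main() {
    	_ = extractPreload("/preloaded.tar.lz4", "/var")
    }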
	I1026 15:18:14.910498  178853 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:18:14.952967  178853 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:18:14.952994  178853 cache_images.go:85] Images are preloaded, skipping loading
	I1026 15:18:14.953003  178853 kubeadm.go:934] updating node { 192.168.39.103 8443 v1.34.1 crio true true} ...
	I1026 15:18:14.953100  178853 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-163393 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.103
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-163393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 15:18:14.953179  178853 ssh_runner.go:195] Run: crio config
	I1026 15:18:15.000853  178853 cni.go:84] Creating CNI manager for ""
	I1026 15:18:15.000882  178853 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 15:18:15.000902  178853 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 15:18:15.000925  178853 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.103 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-163393 NodeName:embed-certs-163393 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.103"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.103 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 15:18:15.001061  178853 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.103
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-163393"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.103"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.103"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 15:18:15.001137  178853 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 15:18:15.013227  178853 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 15:18:15.013306  178853 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 15:18:15.024987  178853 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1026 15:18:15.046596  178853 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 15:18:15.066440  178853 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
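The kubeadm.yaml.new written above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration, as printed earlier in this log). One quick way to sanity-check such a file is to list the kinds it contains; a sketch using gopkg.in/yaml.v3 (an assumed dependency, not what minikube itself uses):

    package main

    import (
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    // listKinds prints the apiVersion/kind of every document in a
    // multi-document YAML stream such as the kubeadm config above.
    func listKinds(path string) error {
    	f, err := os.Open(path)
    	if err != nil {
    		return err
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for {
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); err == io.EOF {
    			return nil
    		} else if err != nil {
    			return err
    		}
    		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
    	}
    }

    func main() {
    	if err := listKinds("kubeadm.yaml"); err != nil {
    		fmt.Println("error:", err)
    	}
    }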
	I1026 15:18:15.088110  178853 ssh_runner.go:195] Run: grep 192.168.39.103	control-plane.minikube.internal$ /etc/hosts
	I1026 15:18:15.092193  178853 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.103	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:18:15.106183  178853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:18:15.252263  178853 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:18:15.272696  178853 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393 for IP: 192.168.39.103
	I1026 15:18:15.272723  178853 certs.go:195] generating shared ca certs ...
	I1026 15:18:15.272747  178853 certs.go:227] acquiring lock for ca certs: {Name:mk93131c71acd79b9ab313e88723331b0af2d4bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:18:15.272953  178853 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-137233/.minikube/ca.key
	I1026 15:18:15.273048  178853 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-137233/.minikube/proxy-client-ca.key
	I1026 15:18:15.273072  178853 certs.go:257] generating profile certs ...
	I1026 15:18:15.273156  178853 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/client.key
	I1026 15:18:15.273182  178853 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/client.crt with IP's: []
	I1026 15:18:15.379843  178853 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/client.crt ...
	I1026 15:18:15.379878  178853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/client.crt: {Name:mk5da6a5a1fc7e75e614932409f60fb9762a0166 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:18:15.380065  178853 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/client.key ...
	I1026 15:18:15.380077  178853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/client.key: {Name:mkf872e3e0d0cb7b05c86f855281eddc4679f1da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:18:15.380154  178853 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/apiserver.key.df7b1e59
	I1026 15:18:15.380169  178853 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/apiserver.crt.df7b1e59 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.103]
	I1026 15:18:15.699094  178853 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/apiserver.crt.df7b1e59 ...
	I1026 15:18:15.699130  178853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/apiserver.crt.df7b1e59: {Name:mkc706865f91fcd7025cc2a28277beb6ca475281 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:18:15.699349  178853 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/apiserver.key.df7b1e59 ...
	I1026 15:18:15.699375  178853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/apiserver.key.df7b1e59: {Name:mk4320a3c4fce39f17ee887c1fbe61aad1c9704e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:18:15.699543  178853 certs.go:382] copying /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/apiserver.crt.df7b1e59 -> /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/apiserver.crt
	I1026 15:18:15.699649  178853 certs.go:386] copying /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/apiserver.key.df7b1e59 -> /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/apiserver.key
	I1026 15:18:15.699749  178853 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/proxy-client.key
	I1026 15:18:15.699774  178853 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/proxy-client.crt with IP's: []
	I1026 15:18:15.977029  178853 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/proxy-client.crt ...
	I1026 15:18:15.977071  178853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/proxy-client.crt: {Name:mkec8a558b297e96f1f00ed264aad0379456c2c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:18:15.977282  178853 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/proxy-client.key ...
	I1026 15:18:15.977302  178853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/proxy-client.key: {Name:mk09f0e9bc803db49e88aa8d09e85d4d23fe2fc2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
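The profile certificates above are ordinary CA-signed x509 certs; the apiserver one is issued for the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.103]. A condensed standard-library sketch of that pattern (a throwaway CA stands in for minikubeCA; this is not minikube's crypto.go, and errors are deliberately ignored for brevity):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"fmt"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA standing in for the shared minikubeCA.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Serving certificate carrying the IP SANs seen in the log above.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.103"),
    		},
    	}
    	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	if err != nil {
    		fmt.Println("error:", err)
    		return
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }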
	I1026 15:18:15.977557  178853 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/141233.pem (1338 bytes)
	W1026 15:18:15.977616  178853 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-137233/.minikube/certs/141233_empty.pem, impossibly tiny 0 bytes
	I1026 15:18:15.977632  178853 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca-key.pem (1679 bytes)
	I1026 15:18:15.977670  178853 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca.pem (1082 bytes)
	I1026 15:18:15.977711  178853 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/cert.pem (1123 bytes)
	I1026 15:18:15.977750  178853 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/key.pem (1675 bytes)
	I1026 15:18:15.977824  178853 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-137233/.minikube/files/etc/ssl/certs/1412332.pem (1708 bytes)
	I1026 15:18:15.978500  178853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 15:18:16.017198  178853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1026 15:18:16.052335  178853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 15:18:16.082191  178853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 15:18:16.112650  178853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1026 15:18:16.143032  178853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 15:18:16.178952  178853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 15:18:16.210684  178853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 15:18:16.251927  178853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/certs/141233.pem --> /usr/share/ca-certificates/141233.pem (1338 bytes)
	I1026 15:18:16.285150  178853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/files/etc/ssl/certs/1412332.pem --> /usr/share/ca-certificates/1412332.pem (1708 bytes)
	I1026 15:18:16.320115  178853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 15:18:16.350222  178853 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 15:18:16.371943  178853 ssh_runner.go:195] Run: openssl version
	I1026 15:18:16.378048  178853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/141233.pem && ln -fs /usr/share/ca-certificates/141233.pem /etc/ssl/certs/141233.pem"
	I1026 15:18:16.392830  178853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/141233.pem
	I1026 15:18:16.397966  178853 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 14:24 /usr/share/ca-certificates/141233.pem
	I1026 15:18:16.398028  178853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/141233.pem
	I1026 15:18:16.405383  178853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/141233.pem /etc/ssl/certs/51391683.0"
	I1026 15:18:16.418616  178853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1412332.pem && ln -fs /usr/share/ca-certificates/1412332.pem /etc/ssl/certs/1412332.pem"
	I1026 15:18:16.431923  178853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1412332.pem
	I1026 15:18:16.437188  178853 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 14:24 /usr/share/ca-certificates/1412332.pem
	I1026 15:18:16.437254  178853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1412332.pem
	I1026 15:18:16.444195  178853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1412332.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 15:18:16.457103  178853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 15:18:16.469825  178853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:18:16.474716  178853 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 14:16 /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:18:16.474769  178853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:18:16.481766  178853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
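Each `openssl x509 -hash` / `ln -fs ... <hash>.0` pair above installs a CA certificate into the OpenSSL hashed directory so the system trust store can resolve it. A sketch of the same two steps (assumes openssl on PATH and write access to the target directory; the real commands above run over SSH with sudo):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkCert computes the OpenSSL subject hash of a PEM certificate and links
    // it as <hash>.0 inside certsDir, mirroring the ln -fs commands above.
    func linkCert(pemPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(certsDir, hash+".0")
    	_ = os.Remove(link) // -f behaviour: replace an existing link if present
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Println("error:", err)
    	}
    }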
	I1026 15:18:16.498345  178853 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 15:18:16.503955  178853 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 15:18:16.504023  178853 kubeadm.go:400] StartCluster: {Name:embed-certs-163393 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-163393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.103 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:18:16.504124  178853 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 15:18:16.504189  178853 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 15:18:16.549983  178853 cri.go:89] found id: ""
	I1026 15:18:16.550087  178853 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 15:18:16.563167  178853 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 15:18:16.576038  178853 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 15:18:16.591974  178853 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 15:18:16.591995  178853 kubeadm.go:157] found existing configuration files:
	
	I1026 15:18:16.592071  178853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 15:18:16.606260  178853 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 15:18:16.606357  178853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 15:18:16.621665  178853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 15:18:16.634690  178853 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 15:18:16.634794  178853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 15:18:16.648108  178853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 15:18:16.661844  178853 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 15:18:16.661906  178853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 15:18:16.675279  178853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 15:18:16.687770  178853 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 15:18:16.687854  178853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 15:18:16.704923  178853 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1026 15:18:16.770011  178853 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1026 15:18:16.770071  178853 kubeadm.go:318] [preflight] Running pre-flight checks
	I1026 15:18:16.861261  178853 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 15:18:16.861404  178853 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 15:18:16.861550  178853 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 15:18:16.875045  178853 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 15:18:13.651733  170754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:14.151352  170754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:14.651308  170754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:15.151690  170754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:15.651712  170754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:16.151207  170754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:16.651180  170754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:17.006229  170754 kubeadm.go:1113] duration metric: took 5.516499533s to wait for elevateKubeSystemPrivileges
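The repeated `kubectl get sa default` calls above simply poll until the default ServiceAccount exists, which is what elevateKubeSystemPrivileges waits for before the cluster-admin binding can take effect. Roughly the same wait with client-go (a sketch; the kubeconfig path and the k8s.io/client-go dependency are assumptions here):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	// Poll every 500ms until the "default" ServiceAccount shows up.
    	for {
    		_, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
    		if err == nil {
    			fmt.Println("default service account is ready")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }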
	I1026 15:18:17.006271  170754 kubeadm.go:402] duration metric: took 4m21.183330721s to StartCluster
	I1026 15:18:17.006293  170754 settings.go:142] acquiring lock: {Name:mk260d179873b5d5f15b4780b692965367036bbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:18:17.006400  170754 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-137233/kubeconfig
	I1026 15:18:17.008320  170754 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-137233/kubeconfig: {Name:mka07626640e842c6c2177ad5f101c4a2dd91d4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:18:17.080354  170754 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.175 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:18:17.080472  170754 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 15:18:17.080672  170754 config.go:182] Loaded profile config "pause-750553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:18:17.119686  170754 out.go:179] * Verifying Kubernetes components...
	I1026 15:18:17.119691  170754 out.go:179] * Enabled addons: 
	W1026 15:18:13.023603  177820 pod_ready.go:104] pod "coredns-66bc5c9577-sqsf7" is not "Ready", error: <nil>
	W1026 15:18:15.522717  177820 pod_ready.go:104] pod "coredns-66bc5c9577-sqsf7" is not "Ready", error: <nil>
	I1026 15:18:17.188959  170754 addons.go:514] duration metric: took 108.487064ms for enable addons: enabled=[]
	I1026 15:18:17.189008  170754 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:18:17.373798  170754 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:18:17.394714  170754 node_ready.go:35] waiting up to 6m0s for node "pause-750553" to be "Ready" ...
	I1026 15:18:18.205474  170754 node_ready.go:49] node "pause-750553" is "Ready"
	I1026 15:18:18.205509  170754 node_ready.go:38] duration metric: took 810.745524ms for node "pause-750553" to be "Ready" ...
	I1026 15:18:18.205528  170754 api_server.go:52] waiting for apiserver process to appear ...
	I1026 15:18:18.205594  170754 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:18:18.239724  170754 api_server.go:72] duration metric: took 1.159313338s to wait for apiserver process to appear ...
	I1026 15:18:18.239759  170754 api_server.go:88] waiting for apiserver healthz status ...
	I1026 15:18:18.239780  170754 api_server.go:253] Checking apiserver healthz at https://192.168.72.175:8443/healthz ...
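The healthz wait above is a plain HTTPS GET against the apiserver, retried until it returns 200/"ok". A sketch (assumes anonymous access to /healthz, which default RBAC allows, and trusts the cluster CA file from the node layout shown elsewhere in this log):

    package main

    import (
    	"crypto/tls"
    	"crypto/x509"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    	"time"
    )

    func main() {
    	caPEM, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	pool := x509.NewCertPool()
    	pool.AppendCertsFromPEM(caPEM)

    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
    	}

    	// Retry until the endpoint answers 200, like the api_server.go wait above.
    	for {
    		resp, err := client.Get("https://192.168.72.175:8443/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    				return
    			}
    		}
    		time.Sleep(time.Second)
    	}
    }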
	W1026 15:18:16.201574  176942 pod_ready.go:104] pod "coredns-5dd5756b68-46566" is not "Ready", error: <nil>
	I1026 15:18:18.700337  176942 pod_ready.go:94] pod "coredns-5dd5756b68-46566" is "Ready"
	I1026 15:18:18.700370  176942 pod_ready.go:86] duration metric: took 38.007455801s for pod "coredns-5dd5756b68-46566" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:18:18.700382  176942 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-6wbnw" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:18:18.702801  176942 pod_ready.go:99] pod "coredns-5dd5756b68-6wbnw" in "kube-system" namespace is gone: getting pod "coredns-5dd5756b68-6wbnw" in "kube-system" namespace (will retry): pods "coredns-5dd5756b68-6wbnw" not found
	I1026 15:18:18.702822  176942 pod_ready.go:86] duration metric: took 2.431905ms for pod "coredns-5dd5756b68-6wbnw" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:18:18.707225  176942 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-065983" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:18:18.713493  176942 pod_ready.go:94] pod "etcd-old-k8s-version-065983" is "Ready"
	I1026 15:18:18.713533  176942 pod_ready.go:86] duration metric: took 6.2848ms for pod "etcd-old-k8s-version-065983" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:18:18.718250  176942 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-065983" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:18:18.724818  176942 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-065983" is "Ready"
	I1026 15:18:18.724848  176942 pod_ready.go:86] duration metric: took 6.569944ms for pod "kube-apiserver-old-k8s-version-065983" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:18:18.727781  176942 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-065983" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:18:19.096810  176942 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-065983" is "Ready"
	I1026 15:18:19.096843  176942 pod_ready.go:86] duration metric: took 369.033655ms for pod "kube-controller-manager-old-k8s-version-065983" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:18:19.299181  176942 pod_ready.go:83] waiting for pod "kube-proxy-bs4p4" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:18:19.696333  176942 pod_ready.go:94] pod "kube-proxy-bs4p4" is "Ready"
	I1026 15:18:19.696365  176942 pod_ready.go:86] duration metric: took 397.149805ms for pod "kube-proxy-bs4p4" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:18:19.897834  176942 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-065983" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:18:20.296898  176942 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-065983" is "Ready"
	I1026 15:18:20.296932  176942 pod_ready.go:86] duration metric: took 399.056756ms for pod "kube-scheduler-old-k8s-version-065983" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:18:20.296945  176942 pod_ready.go:40] duration metric: took 39.608901275s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:18:20.341895  176942 start.go:624] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1026 15:18:20.343471  176942 out.go:203] 
	W1026 15:18:20.344571  176942 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1026 15:18:20.345531  176942 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1026 15:18:20.346709  176942 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-065983" cluster and "default" namespace by default
	I1026 15:18:16.929950  178853 out.go:252]   - Generating certificates and keys ...
	I1026 15:18:16.930121  178853 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1026 15:18:16.930246  178853 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1026 15:18:17.275036  178853 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 15:18:17.424652  178853 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1026 15:18:17.694323  178853 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1026 15:18:17.760202  178853 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1026 15:18:18.278551  178853 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1026 15:18:18.278696  178853 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-163393 localhost] and IPs [192.168.39.103 127.0.0.1 ::1]
	I1026 15:18:18.594369  178853 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1026 15:18:18.594695  178853 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-163393 localhost] and IPs [192.168.39.103 127.0.0.1 ::1]
	I1026 15:18:19.245169  178853 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 15:18:19.641449  178853 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 15:18:19.986891  178853 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1026 15:18:19.986956  178853 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 15:18:20.043176  178853 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 15:18:20.511025  178853 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 15:18:20.810927  178853 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 15:18:21.135955  178853 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 15:18:21.353393  178853 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 15:18:21.354093  178853 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 15:18:21.356485  178853 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1026 15:18:18.497737  177820 pod_ready.go:104] pod "coredns-66bc5c9577-sqsf7" is not "Ready", error: <nil>
	W1026 15:18:20.521956  177820 pod_ready.go:104] pod "coredns-66bc5c9577-sqsf7" is not "Ready", error: <nil>
	W1026 15:18:22.523790  177820 pod_ready.go:104] pod "coredns-66bc5c9577-sqsf7" is not "Ready", error: <nil>
	I1026 15:18:18.669171  170754 api_server.go:279] https://192.168.72.175:8443/healthz returned 200:
	ok
	I1026 15:18:18.672092  170754 api_server.go:141] control plane version: v1.34.1
	I1026 15:18:18.672158  170754 api_server.go:131] duration metric: took 432.389369ms to wait for apiserver health ...
	I1026 15:18:18.672171  170754 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 15:18:18.694765  170754 system_pods.go:59] 7 kube-system pods found
	I1026 15:18:18.694803  170754 system_pods.go:61] "coredns-66bc5c9577-5km5n" [da30f29b-ab29-4d65-ba42-0626bad52267] Pending
	I1026 15:18:18.694811  170754 system_pods.go:61] "coredns-66bc5c9577-77frh" [af90376e-433e-4f19-b0c8-0ddf58a79b0b] Pending
	I1026 15:18:18.694824  170754 system_pods.go:61] "etcd-pause-750553" [b108b19d-4036-4cd5-8681-f0d2262a3c5c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:18:18.694833  170754 system_pods.go:61] "kube-apiserver-pause-750553" [dd5a0e81-80f5-4979-a26e-3d628737b8b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:18:18.694844  170754 system_pods.go:61] "kube-controller-manager-pause-750553" [d1922dca-907b-4987-a109-d9076b60a615] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:18:18.694853  170754 system_pods.go:61] "kube-proxy-5bgtf" [c84300cc-7cc1-4b0d-83e7-052a94f0c7ab] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1026 15:18:18.694860  170754 system_pods.go:61] "kube-scheduler-pause-750553" [c88ec255-28ce-4764-b0b2-ba5236312c0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:18:18.694872  170754 system_pods.go:74] duration metric: took 22.693932ms to wait for pod list to return data ...
	I1026 15:18:18.694885  170754 default_sa.go:34] waiting for default service account to be created ...
	I1026 15:18:18.707135  170754 default_sa.go:45] found service account: "default"
	I1026 15:18:18.707160  170754 default_sa.go:55] duration metric: took 12.263129ms for default service account to be created ...
	I1026 15:18:18.707171  170754 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 15:18:18.722583  170754 system_pods.go:86] 7 kube-system pods found
	I1026 15:18:18.722623  170754 system_pods.go:89] "coredns-66bc5c9577-5km5n" [da30f29b-ab29-4d65-ba42-0626bad52267] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:18:18.722631  170754 system_pods.go:89] "coredns-66bc5c9577-77frh" [af90376e-433e-4f19-b0c8-0ddf58a79b0b] Pending
	I1026 15:18:18.722641  170754 system_pods.go:89] "etcd-pause-750553" [b108b19d-4036-4cd5-8681-f0d2262a3c5c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:18:18.722649  170754 system_pods.go:89] "kube-apiserver-pause-750553" [dd5a0e81-80f5-4979-a26e-3d628737b8b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:18:18.722658  170754 system_pods.go:89] "kube-controller-manager-pause-750553" [d1922dca-907b-4987-a109-d9076b60a615] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:18:18.722670  170754 system_pods.go:89] "kube-proxy-5bgtf" [c84300cc-7cc1-4b0d-83e7-052a94f0c7ab] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1026 15:18:18.722677  170754 system_pods.go:89] "kube-scheduler-pause-750553" [c88ec255-28ce-4764-b0b2-ba5236312c0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:18:18.722710  170754 retry.go:31] will retry after 192.605388ms: missing components: kube-dns, kube-proxy
	I1026 15:18:18.920102  170754 system_pods.go:86] 7 kube-system pods found
	I1026 15:18:18.920132  170754 system_pods.go:89] "coredns-66bc5c9577-5km5n" [da30f29b-ab29-4d65-ba42-0626bad52267] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:18:18.920140  170754 system_pods.go:89] "coredns-66bc5c9577-77frh" [af90376e-433e-4f19-b0c8-0ddf58a79b0b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:18:18.920146  170754 system_pods.go:89] "etcd-pause-750553" [b108b19d-4036-4cd5-8681-f0d2262a3c5c] Running
	I1026 15:18:18.920154  170754 system_pods.go:89] "kube-apiserver-pause-750553" [dd5a0e81-80f5-4979-a26e-3d628737b8b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:18:18.920160  170754 system_pods.go:89] "kube-controller-manager-pause-750553" [d1922dca-907b-4987-a109-d9076b60a615] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:18:18.920165  170754 system_pods.go:89] "kube-proxy-5bgtf" [c84300cc-7cc1-4b0d-83e7-052a94f0c7ab] Running
	I1026 15:18:18.920171  170754 system_pods.go:89] "kube-scheduler-pause-750553" [c88ec255-28ce-4764-b0b2-ba5236312c0f] Running
	I1026 15:18:18.920190  170754 retry.go:31] will retry after 347.817824ms: missing components: kube-dns
	I1026 15:18:19.274785  170754 system_pods.go:86] 7 kube-system pods found
	I1026 15:18:19.274825  170754 system_pods.go:89] "coredns-66bc5c9577-5km5n" [da30f29b-ab29-4d65-ba42-0626bad52267] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:18:19.274835  170754 system_pods.go:89] "coredns-66bc5c9577-77frh" [af90376e-433e-4f19-b0c8-0ddf58a79b0b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:18:19.274843  170754 system_pods.go:89] "etcd-pause-750553" [b108b19d-4036-4cd5-8681-f0d2262a3c5c] Running
	I1026 15:18:19.274852  170754 system_pods.go:89] "kube-apiserver-pause-750553" [dd5a0e81-80f5-4979-a26e-3d628737b8b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:18:19.274861  170754 system_pods.go:89] "kube-controller-manager-pause-750553" [d1922dca-907b-4987-a109-d9076b60a615] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:18:19.274867  170754 system_pods.go:89] "kube-proxy-5bgtf" [c84300cc-7cc1-4b0d-83e7-052a94f0c7ab] Running
	I1026 15:18:19.274874  170754 system_pods.go:89] "kube-scheduler-pause-750553" [c88ec255-28ce-4764-b0b2-ba5236312c0f] Running
	I1026 15:18:19.274898  170754 retry.go:31] will retry after 438.1694ms: missing components: kube-dns
	I1026 15:18:19.717863  170754 system_pods.go:86] 7 kube-system pods found
	I1026 15:18:19.717910  170754 system_pods.go:89] "coredns-66bc5c9577-5km5n" [da30f29b-ab29-4d65-ba42-0626bad52267] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:18:19.717924  170754 system_pods.go:89] "coredns-66bc5c9577-77frh" [af90376e-433e-4f19-b0c8-0ddf58a79b0b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:18:19.717936  170754 system_pods.go:89] "etcd-pause-750553" [b108b19d-4036-4cd5-8681-f0d2262a3c5c] Running
	I1026 15:18:19.717948  170754 system_pods.go:89] "kube-apiserver-pause-750553" [dd5a0e81-80f5-4979-a26e-3d628737b8b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:18:19.717958  170754 system_pods.go:89] "kube-controller-manager-pause-750553" [d1922dca-907b-4987-a109-d9076b60a615] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:18:19.717969  170754 system_pods.go:89] "kube-proxy-5bgtf" [c84300cc-7cc1-4b0d-83e7-052a94f0c7ab] Running
	I1026 15:18:19.717973  170754 system_pods.go:89] "kube-scheduler-pause-750553" [c88ec255-28ce-4764-b0b2-ba5236312c0f] Running
	I1026 15:18:19.717995  170754 retry.go:31] will retry after 411.129085ms: missing components: kube-dns
	I1026 15:18:20.133294  170754 system_pods.go:86] 7 kube-system pods found
	I1026 15:18:20.133328  170754 system_pods.go:89] "coredns-66bc5c9577-5km5n" [da30f29b-ab29-4d65-ba42-0626bad52267] Running
	I1026 15:18:20.133336  170754 system_pods.go:89] "coredns-66bc5c9577-77frh" [af90376e-433e-4f19-b0c8-0ddf58a79b0b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:18:20.133341  170754 system_pods.go:89] "etcd-pause-750553" [b108b19d-4036-4cd5-8681-f0d2262a3c5c] Running
	I1026 15:18:20.133348  170754 system_pods.go:89] "kube-apiserver-pause-750553" [dd5a0e81-80f5-4979-a26e-3d628737b8b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:18:20.133354  170754 system_pods.go:89] "kube-controller-manager-pause-750553" [d1922dca-907b-4987-a109-d9076b60a615] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:18:20.133359  170754 system_pods.go:89] "kube-proxy-5bgtf" [c84300cc-7cc1-4b0d-83e7-052a94f0c7ab] Running
	I1026 15:18:20.133363  170754 system_pods.go:89] "kube-scheduler-pause-750553" [c88ec255-28ce-4764-b0b2-ba5236312c0f] Running
	I1026 15:18:20.133374  170754 system_pods.go:126] duration metric: took 1.426195569s to wait for k8s-apps to be running ...
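The retry loop above lists kube-system pods and re-checks until none of the expected components is missing or stuck Pending. The core of one iteration, expressed with client-go (a sketch; assumes a reachable kubeconfig and the k8s.io/client-go dependency):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // runningSystemPods returns the names of kube-system pods in phase Running.
    func runningSystemPods(cs *kubernetes.Clientset) ([]string, error) {
    	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		return nil, err
    	}
    	var running []string
    	for _, p := range pods.Items {
    		if p.Status.Phase == corev1.PodRunning {
    			running = append(running, p.Name)
    		}
    	}
    	return running, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	names, err := runningSystemPods(cs)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("running kube-system pods:", names)
    }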
	I1026 15:18:20.133381  170754 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 15:18:20.133428  170754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:18:20.151662  170754 system_svc.go:56] duration metric: took 18.267544ms WaitForService to wait for kubelet
	I1026 15:18:20.151702  170754 kubeadm.go:586] duration metric: took 3.071293423s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:18:20.151725  170754 node_conditions.go:102] verifying NodePressure condition ...
	I1026 15:18:20.155023  170754 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 15:18:20.155069  170754 node_conditions.go:123] node cpu capacity is 2
	I1026 15:18:20.155089  170754 node_conditions.go:105] duration metric: took 3.356426ms to run NodePressure ...
	I1026 15:18:20.155106  170754 start.go:241] waiting for startup goroutines ...
	I1026 15:18:20.155122  170754 start.go:246] waiting for cluster config update ...
	I1026 15:18:20.155134  170754 start.go:255] writing updated cluster config ...
	I1026 15:18:20.155530  170754 ssh_runner.go:195] Run: rm -f paused
	I1026 15:18:20.160694  170754 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:18:20.161558  170754 kapi.go:59] client config for pause-750553: &rest.Config{Host:"https://192.168.72.175:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21664-137233/.minikube/profiles/pause-750553/client.crt", KeyFile:"/home/jenkins/minikube-integration/21664-137233/.minikube/profiles/pause-750553/client.key", CAFile:"/home/jenkins/minikube-integration/21664-137233/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c6a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1026 15:18:20.164103  170754 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5km5n" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:18:20.168561  170754 pod_ready.go:94] pod "coredns-66bc5c9577-5km5n" is "Ready"
	I1026 15:18:20.168577  170754 pod_ready.go:86] duration metric: took 4.454081ms for pod "coredns-66bc5c9577-5km5n" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:18:20.168584  170754 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-77frh" in "kube-system" namespace to be "Ready" or be gone ...
	W1026 15:18:22.175599  170754 pod_ready.go:104] pod "coredns-66bc5c9577-77frh" is not "Ready", error: <nil>
	I1026 15:18:21.357857  178853 out.go:252]   - Booting up control plane ...
	I1026 15:18:21.357974  178853 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 15:18:21.360202  178853 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 15:18:21.361790  178853 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 15:18:21.379952  178853 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 15:18:21.380129  178853 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1026 15:18:21.387763  178853 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1026 15:18:21.388240  178853 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 15:18:21.388337  178853 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1026 15:18:21.560942  178853 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 15:18:21.561078  178853 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 15:18:22.562531  178853 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001586685s
	I1026 15:18:22.565505  178853 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1026 15:18:22.565613  178853 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.39.103:8443/livez
	I1026 15:18:22.565724  178853 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1026 15:18:22.565818  178853 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1026 15:18:25.181252  178853 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.617527962s
	W1026 15:18:25.023498  177820 pod_ready.go:104] pod "coredns-66bc5c9577-sqsf7" is not "Ready", error: <nil>
	W1026 15:18:27.023808  177820 pod_ready.go:104] pod "coredns-66bc5c9577-sqsf7" is not "Ready", error: <nil>
	I1026 15:18:26.840883  178853 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.278540407s
	I1026 15:18:28.066772  178853 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 5.50503235s
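(Editor's note: the [control-plane-check] lines above poll four health endpoints: the kubelet at http://127.0.0.1:10248/healthz, kube-controller-manager at https://127.0.0.1:10257/healthz, kube-scheduler at https://127.0.0.1:10259/livez, and kube-apiserver at https://192.168.39.103:8443/livez. A minimal Go sketch of the same probes, assuming it runs on the control-plane node itself and skipping certificate verification because the component health ports serve self-signed certs; this is an illustration, not the kubeadm implementation.)

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Endpoints copied from the [control-plane-check] log lines above.
	endpoints := []string{
		"http://127.0.0.1:10248/healthz",    // kubelet
		"https://127.0.0.1:10257/healthz",   // kube-controller-manager
		"https://127.0.0.1:10259/livez",     // kube-scheduler
		"https://192.168.39.103:8443/livez", // kube-apiserver
	}

	client := &http.Client{
		Timeout: 5 * time.Second,
		// The health ports use the cluster's self-signed serving certs,
		// so verification is skipped for this ad-hoc check only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	for _, url := range endpoints {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("%-40s error: %v\n", url, err)
			continue
		}
		resp.Body.Close()
		fmt.Printf("%-40s %s\n", url, resp.Status)
	}
}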
	I1026 15:18:28.083331  178853 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 15:18:28.102934  178853 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 15:18:28.120717  178853 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 15:18:28.120995  178853 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-163393 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 15:18:28.139172  178853 kubeadm.go:318] [bootstrap-token] Using token: uv77ly.o2qdsd5r72jmaiwn
	W1026 15:18:24.675876  170754 pod_ready.go:104] pod "coredns-66bc5c9577-77frh" is not "Ready", error: <nil>
	W1026 15:18:27.176609  170754 pod_ready.go:104] pod "coredns-66bc5c9577-77frh" is not "Ready", error: <nil>
	I1026 15:18:28.174961  170754 pod_ready.go:94] pod "coredns-66bc5c9577-77frh" is "Ready"
	I1026 15:18:28.175002  170754 pod_ready.go:86] duration metric: took 8.006410722s for pod "coredns-66bc5c9577-77frh" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:18:28.177658  170754 pod_ready.go:83] waiting for pod "etcd-pause-750553" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:18:28.183452  170754 pod_ready.go:94] pod "etcd-pause-750553" is "Ready"
	I1026 15:18:28.183508  170754 pod_ready.go:86] duration metric: took 5.819403ms for pod "etcd-pause-750553" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:18:28.185497  170754 pod_ready.go:83] waiting for pod "kube-apiserver-pause-750553" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:18:28.189812  170754 pod_ready.go:94] pod "kube-apiserver-pause-750553" is "Ready"
	I1026 15:18:28.189832  170754 pod_ready.go:86] duration metric: took 4.312219ms for pod "kube-apiserver-pause-750553" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:18:28.192873  170754 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-750553" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:18:28.140488  178853 out.go:252]   - Configuring RBAC rules ...
	I1026 15:18:28.140649  178853 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 15:18:28.156992  178853 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 15:18:28.175040  178853 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 15:18:28.179874  178853 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 15:18:28.184364  178853 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 15:18:28.188857  178853 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 15:18:28.475121  178853 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 15:18:28.955825  178853 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1026 15:18:29.474209  178853 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1026 15:18:29.475071  178853 kubeadm.go:318] 
	I1026 15:18:29.475158  178853 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1026 15:18:29.475199  178853 kubeadm.go:318] 
	I1026 15:18:29.475323  178853 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1026 15:18:29.475334  178853 kubeadm.go:318] 
	I1026 15:18:29.475371  178853 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1026 15:18:29.475489  178853 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 15:18:29.475577  178853 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 15:18:29.475587  178853 kubeadm.go:318] 
	I1026 15:18:29.475672  178853 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1026 15:18:29.475681  178853 kubeadm.go:318] 
	I1026 15:18:29.475772  178853 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 15:18:29.475793  178853 kubeadm.go:318] 
	I1026 15:18:29.475911  178853 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1026 15:18:29.476047  178853 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 15:18:29.476168  178853 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 15:18:29.476180  178853 kubeadm.go:318] 
	I1026 15:18:29.476305  178853 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 15:18:29.476412  178853 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1026 15:18:29.476421  178853 kubeadm.go:318] 
	I1026 15:18:29.476548  178853 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token uv77ly.o2qdsd5r72jmaiwn \
	I1026 15:18:29.476713  178853 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3ad055a424ab8eb6b83482448af651001c6d6c03abf832b7f498f66a21acb6be \
	I1026 15:18:29.476749  178853 kubeadm.go:318] 	--control-plane 
	I1026 15:18:29.476758  178853 kubeadm.go:318] 
	I1026 15:18:29.476866  178853 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1026 15:18:29.476877  178853 kubeadm.go:318] 
	I1026 15:18:29.476976  178853 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token uv77ly.o2qdsd5r72jmaiwn \
	I1026 15:18:29.477165  178853 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3ad055a424ab8eb6b83482448af651001c6d6c03abf832b7f498f66a21acb6be 
	I1026 15:18:29.477909  178853 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 15:18:29.477943  178853 cni.go:84] Creating CNI manager for ""
	I1026 15:18:29.477957  178853 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 15:18:29.479970  178853 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1026 15:18:28.374112  170754 pod_ready.go:94] pod "kube-controller-manager-pause-750553" is "Ready"
	I1026 15:18:28.374139  170754 pod_ready.go:86] duration metric: took 181.243358ms for pod "kube-controller-manager-pause-750553" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:18:28.574273  170754 pod_ready.go:83] waiting for pod "kube-proxy-5bgtf" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:18:28.974129  170754 pod_ready.go:94] pod "kube-proxy-5bgtf" is "Ready"
	I1026 15:18:28.974172  170754 pod_ready.go:86] duration metric: took 399.869701ms for pod "kube-proxy-5bgtf" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:18:29.174265  170754 pod_ready.go:83] waiting for pod "kube-scheduler-pause-750553" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:18:29.572768  170754 pod_ready.go:94] pod "kube-scheduler-pause-750553" is "Ready"
	I1026 15:18:29.572795  170754 pod_ready.go:86] duration metric: took 398.503317ms for pod "kube-scheduler-pause-750553" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:18:29.572809  170754 pod_ready.go:40] duration metric: took 9.412085156s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:18:29.629035  170754 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1026 15:18:29.631724  170754 out.go:179] * Done! kubectl is now configured to use "pause-750553" cluster and "default" namespace by default
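(Editor's note: the pod_ready wait logged above cycles through the label selectors k8s-app=kube-dns, component=etcd, component=kube-apiserver, component=kube-controller-manager, k8s-app=kube-proxy and component=kube-scheduler and checks each matching kube-system pod's Ready condition. A minimal client-go sketch of the same check, assuming the default kubeconfig location that the run above just configured for the "pause-750553" context; a rough equivalent, not minikube's own code.)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config, whose current-context was just set to pause-750553.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()

	// Same label selectors the extra pod_ready wait cycles through.
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	for _, sel := range selectors {
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			fmt.Printf("%-45s ready=%v\n", p.Name, ready)
		}
	}
}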
	
	
	==> CRI-O <==
	Oct 26 15:18:30 pause-750553 crio[3324]: time="2025-10-26 15:18:30.285694876Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761491910285671873,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=02c64727-7a7e-4c4f-89f0-05b4dbb6c0ce name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 15:18:30 pause-750553 crio[3324]: time="2025-10-26 15:18:30.286327213Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=108cb487-7929-4890-8694-b70f6265c04b name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:18:30 pause-750553 crio[3324]: time="2025-10-26 15:18:30.286401639Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=108cb487-7929-4890-8694-b70f6265c04b name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:18:30 pause-750553 crio[3324]: time="2025-10-26 15:18:30.286675271Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6000f3862c3bb59168eb5789b29855f6fe826c24b55b140532012346a3664e64,PodSandboxId:94ed34380cfcf4cd73383420814845da94f014f3ba0b6c09814ee19fe6f672f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761491899288067036,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-77frh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af90376e-433e-4f19-b0c8-0ddf58a79b0b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:877e736c8a7717179c2a1b8478ae3d3f10083854c127b1e3fbc1d0eea61bfb86,PodSandboxId:72f7e3e974ada8cd01f30b7d152d3b86f57bb2afb5696d4a056c343322181b8d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761491899267648352,Labels:map[string]stri
ng{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5km5n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da30f29b-ab29-4d65-ba42-0626bad52267,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93975cb982e3c43d536d8251d5e9e4e136461cdd62deed78f47cec56d90e8d8e,PodSandboxId:0fb764a83b14ea8704a84aad67ea34c00d725bad24bbaf5c1577d01a6300b6b1,Metadata:&Containe
rMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761491897825343884,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5bgtf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c84300cc-7cc1-4b0d-83e7-052a94f0c7ab,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b42948e1829ec46e8761bc5ed39e7079218fdceee31fdb5333c6eb75bcfc6a3,PodSandboxId:44b7aa6a20a66d9c2d746eaee8c6b5310f84779a6c17b0c0a1e8b5a2730aa5f8,Metadata:&ContainerMetadata{Name:kube-controller-ma
nager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761491885346208328,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-750553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdff4b99713a5dca7c65f03b35941135,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06f4c2a4038ffdca314bf94eef82912d08718933a8cae1d63fbe6923b81887
44,PodSandboxId:5f708a2c2e9f6b0cea87f1df2cdfbe287122a9c151a57d632e798acef445d3a2,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761491885341513589,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-750553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 354c919aa8057eb2212dd92b7f739c9e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Co
ntainer{Id:9e1e8ba0c02401d2b683f372fc58cc11cdf5c439bd9edff51b9c110ece60aaf5,PodSandboxId:d73d285385390edc7f994b50018eb219f96cf2788b24eb770f05b3e07b0e2ded,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761491885298487671,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-750553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8db747c6973e70300bcb02a4b50ac30,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9be07d11240a80f1c05c43acb18334335ac1ad7b6ff2cb2952cc120638c677ec,PodSandboxId:8862f3b75169e70a09160cc029cfcbfe98cf85f45288264acfcbae6e973a3e20,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761491885287823817,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-750553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b6af0b874a7a82d2f4d0e4e41f269fc,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=108cb487-7929-4890-8694-b70f6265c04b name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:18:30 pause-750553 crio[3324]: time="2025-10-26 15:18:30.328564286Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b2e5c2ba-bd83-4a09-abb0-8e6be01e21af name=/runtime.v1.RuntimeService/Version
	Oct 26 15:18:30 pause-750553 crio[3324]: time="2025-10-26 15:18:30.328633295Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b2e5c2ba-bd83-4a09-abb0-8e6be01e21af name=/runtime.v1.RuntimeService/Version
	Oct 26 15:18:30 pause-750553 crio[3324]: time="2025-10-26 15:18:30.329928295Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5aa28846-3ee1-4ee7-822d-106dd6239ef2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 15:18:30 pause-750553 crio[3324]: time="2025-10-26 15:18:30.330400932Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761491910330377710,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5aa28846-3ee1-4ee7-822d-106dd6239ef2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 15:18:30 pause-750553 crio[3324]: time="2025-10-26 15:18:30.330937189Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b3a571e7-01be-4758-815d-3094f3f5bc30 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:18:30 pause-750553 crio[3324]: time="2025-10-26 15:18:30.331019735Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b3a571e7-01be-4758-815d-3094f3f5bc30 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:18:30 pause-750553 crio[3324]: time="2025-10-26 15:18:30.331285066Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6000f3862c3bb59168eb5789b29855f6fe826c24b55b140532012346a3664e64,PodSandboxId:94ed34380cfcf4cd73383420814845da94f014f3ba0b6c09814ee19fe6f672f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761491899288067036,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-77frh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af90376e-433e-4f19-b0c8-0ddf58a79b0b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:877e736c8a7717179c2a1b8478ae3d3f10083854c127b1e3fbc1d0eea61bfb86,PodSandboxId:72f7e3e974ada8cd01f30b7d152d3b86f57bb2afb5696d4a056c343322181b8d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761491899267648352,Labels:map[string]stri
ng{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5km5n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da30f29b-ab29-4d65-ba42-0626bad52267,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93975cb982e3c43d536d8251d5e9e4e136461cdd62deed78f47cec56d90e8d8e,PodSandboxId:0fb764a83b14ea8704a84aad67ea34c00d725bad24bbaf5c1577d01a6300b6b1,Metadata:&Containe
rMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761491897825343884,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5bgtf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c84300cc-7cc1-4b0d-83e7-052a94f0c7ab,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b42948e1829ec46e8761bc5ed39e7079218fdceee31fdb5333c6eb75bcfc6a3,PodSandboxId:44b7aa6a20a66d9c2d746eaee8c6b5310f84779a6c17b0c0a1e8b5a2730aa5f8,Metadata:&ContainerMetadata{Name:kube-controller-ma
nager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761491885346208328,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-750553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdff4b99713a5dca7c65f03b35941135,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06f4c2a4038ffdca314bf94eef82912d08718933a8cae1d63fbe6923b81887
44,PodSandboxId:5f708a2c2e9f6b0cea87f1df2cdfbe287122a9c151a57d632e798acef445d3a2,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761491885341513589,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-750553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 354c919aa8057eb2212dd92b7f739c9e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Co
ntainer{Id:9e1e8ba0c02401d2b683f372fc58cc11cdf5c439bd9edff51b9c110ece60aaf5,PodSandboxId:d73d285385390edc7f994b50018eb219f96cf2788b24eb770f05b3e07b0e2ded,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761491885298487671,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-750553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8db747c6973e70300bcb02a4b50ac30,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9be07d11240a80f1c05c43acb18334335ac1ad7b6ff2cb2952cc120638c677ec,PodSandboxId:8862f3b75169e70a09160cc029cfcbfe98cf85f45288264acfcbae6e973a3e20,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761491885287823817,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-750553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b6af0b874a7a82d2f4d0e4e41f269fc,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b3a571e7-01be-4758-815d-3094f3f5bc30 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:18:30 pause-750553 crio[3324]: time="2025-10-26 15:18:30.372513701Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=60b61755-ff3a-4b10-8449-601422863a62 name=/runtime.v1.RuntimeService/Version
	Oct 26 15:18:30 pause-750553 crio[3324]: time="2025-10-26 15:18:30.372683089Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=60b61755-ff3a-4b10-8449-601422863a62 name=/runtime.v1.RuntimeService/Version
	Oct 26 15:18:30 pause-750553 crio[3324]: time="2025-10-26 15:18:30.374896758Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=814dff2e-822a-45ca-b686-3add10727c19 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 15:18:30 pause-750553 crio[3324]: time="2025-10-26 15:18:30.375503396Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761491910375476563,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=814dff2e-822a-45ca-b686-3add10727c19 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 15:18:30 pause-750553 crio[3324]: time="2025-10-26 15:18:30.376359118Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aa766d05-f344-4425-92d5-2f19a980ff64 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:18:30 pause-750553 crio[3324]: time="2025-10-26 15:18:30.376536860Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aa766d05-f344-4425-92d5-2f19a980ff64 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:18:30 pause-750553 crio[3324]: time="2025-10-26 15:18:30.376733781Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6000f3862c3bb59168eb5789b29855f6fe826c24b55b140532012346a3664e64,PodSandboxId:94ed34380cfcf4cd73383420814845da94f014f3ba0b6c09814ee19fe6f672f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761491899288067036,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-77frh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af90376e-433e-4f19-b0c8-0ddf58a79b0b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:877e736c8a7717179c2a1b8478ae3d3f10083854c127b1e3fbc1d0eea61bfb86,PodSandboxId:72f7e3e974ada8cd01f30b7d152d3b86f57bb2afb5696d4a056c343322181b8d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761491899267648352,Labels:map[string]stri
ng{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5km5n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da30f29b-ab29-4d65-ba42-0626bad52267,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93975cb982e3c43d536d8251d5e9e4e136461cdd62deed78f47cec56d90e8d8e,PodSandboxId:0fb764a83b14ea8704a84aad67ea34c00d725bad24bbaf5c1577d01a6300b6b1,Metadata:&Containe
rMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761491897825343884,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5bgtf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c84300cc-7cc1-4b0d-83e7-052a94f0c7ab,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b42948e1829ec46e8761bc5ed39e7079218fdceee31fdb5333c6eb75bcfc6a3,PodSandboxId:44b7aa6a20a66d9c2d746eaee8c6b5310f84779a6c17b0c0a1e8b5a2730aa5f8,Metadata:&ContainerMetadata{Name:kube-controller-ma
nager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761491885346208328,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-750553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdff4b99713a5dca7c65f03b35941135,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06f4c2a4038ffdca314bf94eef82912d08718933a8cae1d63fbe6923b81887
44,PodSandboxId:5f708a2c2e9f6b0cea87f1df2cdfbe287122a9c151a57d632e798acef445d3a2,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761491885341513589,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-750553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 354c919aa8057eb2212dd92b7f739c9e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Co
ntainer{Id:9e1e8ba0c02401d2b683f372fc58cc11cdf5c439bd9edff51b9c110ece60aaf5,PodSandboxId:d73d285385390edc7f994b50018eb219f96cf2788b24eb770f05b3e07b0e2ded,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761491885298487671,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-750553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8db747c6973e70300bcb02a4b50ac30,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9be07d11240a80f1c05c43acb18334335ac1ad7b6ff2cb2952cc120638c677ec,PodSandboxId:8862f3b75169e70a09160cc029cfcbfe98cf85f45288264acfcbae6e973a3e20,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761491885287823817,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-750553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b6af0b874a7a82d2f4d0e4e41f269fc,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aa766d05-f344-4425-92d5-2f19a980ff64 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:18:30 pause-750553 crio[3324]: time="2025-10-26 15:18:30.417686618Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=18b823a2-9d9a-4177-b067-a23954bfc3f4 name=/runtime.v1.RuntimeService/Version
	Oct 26 15:18:30 pause-750553 crio[3324]: time="2025-10-26 15:18:30.417767973Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=18b823a2-9d9a-4177-b067-a23954bfc3f4 name=/runtime.v1.RuntimeService/Version
	Oct 26 15:18:30 pause-750553 crio[3324]: time="2025-10-26 15:18:30.419023464Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=12b3f355-17ac-4f9e-abd2-ade18cbecc3e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 15:18:30 pause-750553 crio[3324]: time="2025-10-26 15:18:30.419470898Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761491910419449607,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=12b3f355-17ac-4f9e-abd2-ade18cbecc3e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 15:18:30 pause-750553 crio[3324]: time="2025-10-26 15:18:30.420014457Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3dab5aba-a159-447a-89e7-416f790c261c name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:18:30 pause-750553 crio[3324]: time="2025-10-26 15:18:30.420155581Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3dab5aba-a159-447a-89e7-416f790c261c name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:18:30 pause-750553 crio[3324]: time="2025-10-26 15:18:30.420342111Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6000f3862c3bb59168eb5789b29855f6fe826c24b55b140532012346a3664e64,PodSandboxId:94ed34380cfcf4cd73383420814845da94f014f3ba0b6c09814ee19fe6f672f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761491899288067036,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-77frh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af90376e-433e-4f19-b0c8-0ddf58a79b0b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:877e736c8a7717179c2a1b8478ae3d3f10083854c127b1e3fbc1d0eea61bfb86,PodSandboxId:72f7e3e974ada8cd01f30b7d152d3b86f57bb2afb5696d4a056c343322181b8d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761491899267648352,Labels:map[string]stri
ng{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5km5n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da30f29b-ab29-4d65-ba42-0626bad52267,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93975cb982e3c43d536d8251d5e9e4e136461cdd62deed78f47cec56d90e8d8e,PodSandboxId:0fb764a83b14ea8704a84aad67ea34c00d725bad24bbaf5c1577d01a6300b6b1,Metadata:&Containe
rMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761491897825343884,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5bgtf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c84300cc-7cc1-4b0d-83e7-052a94f0c7ab,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b42948e1829ec46e8761bc5ed39e7079218fdceee31fdb5333c6eb75bcfc6a3,PodSandboxId:44b7aa6a20a66d9c2d746eaee8c6b5310f84779a6c17b0c0a1e8b5a2730aa5f8,Metadata:&ContainerMetadata{Name:kube-controller-ma
nager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761491885346208328,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-750553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdff4b99713a5dca7c65f03b35941135,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06f4c2a4038ffdca314bf94eef82912d08718933a8cae1d63fbe6923b81887
44,PodSandboxId:5f708a2c2e9f6b0cea87f1df2cdfbe287122a9c151a57d632e798acef445d3a2,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761491885341513589,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-750553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 354c919aa8057eb2212dd92b7f739c9e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Co
ntainer{Id:9e1e8ba0c02401d2b683f372fc58cc11cdf5c439bd9edff51b9c110ece60aaf5,PodSandboxId:d73d285385390edc7f994b50018eb219f96cf2788b24eb770f05b3e07b0e2ded,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761491885298487671,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-750553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8db747c6973e70300bcb02a4b50ac30,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9be07d11240a80f1c05c43acb18334335ac1ad7b6ff2cb2952cc120638c677ec,PodSandboxId:8862f3b75169e70a09160cc029cfcbfe98cf85f45288264acfcbae6e973a3e20,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761491885287823817,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-750553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b6af0b874a7a82d2f4d0e4e41f269fc,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3dab5aba-a159-447a-89e7-416f790c261c name=/runtime.v1.RuntimeService/ListContainers
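(Editor's note: the CRI-O debug entries above are the server side of three CRI RPCs, Version, ImageFsInfo and ListContainers, issued repeatedly while the logs were collected. A minimal Go sketch of the same calls against the CRI socket, assuming CRI-O's conventional socket path /var/run/crio/crio.sock inside the VM; a sketch for reference, not the log collector's implementation.)

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed default CRI-O socket inside the minikube VM.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	img := runtimeapi.NewImageServiceClient(conn)

	// Version: mirrors the VersionRequest/VersionResponse pairs in the debug log.
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Println("runtime:", ver.RuntimeName, ver.RuntimeVersion)

	// ImageFsInfo: image filesystem usage, as in the ImageFsInfoResponse entries.
	fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
	if err != nil {
		panic(err)
	}
	for _, f := range fs.ImageFilesystems {
		fmt.Println("image fs:", f.FsId.Mountpoint, "used bytes:", f.UsedBytes.Value)
	}

	// ListContainers with an empty filter, like the "No filters were applied" requests.
	cl, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range cl.Containers {
		fmt.Println(c.Metadata.Name, c.State)
	}
}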
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6000f3862c3bb       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   11 seconds ago      Running             coredns                   0                   94ed34380cfcf       coredns-66bc5c9577-77frh
	877e736c8a771       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   11 seconds ago      Running             coredns                   0                   72f7e3e974ada       coredns-66bc5c9577-5km5n
	93975cb982e3c       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   12 seconds ago      Running             kube-proxy                0                   0fb764a83b14e       kube-proxy-5bgtf
	4b42948e1829e       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   25 seconds ago      Running             kube-controller-manager   1                   44b7aa6a20a66       kube-controller-manager-pause-750553
	06f4c2a4038ff       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   25 seconds ago      Running             etcd                      3                   5f708a2c2e9f6       etcd-pause-750553
	9e1e8ba0c0240       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   25 seconds ago      Running             kube-scheduler            3                   d73d285385390       kube-scheduler-pause-750553
	9be07d11240a8       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   25 seconds ago      Running             kube-apiserver            1                   8862f3b75169e       kube-apiserver-pause-750553
	
	
	==> coredns [6000f3862c3bb59168eb5789b29855f6fe826c24b55b140532012346a3664e64] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	
	
	==> coredns [877e736c8a7717179c2a1b8478ae3d3f10083854c127b1e3fbc1d0eea61bfb86] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	
	
	==> describe nodes <==
	Name:               pause-750553
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-750553
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46
	                    minikube.k8s.io/name=pause-750553
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T15_18_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 15:18:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-750553
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 15:18:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 15:18:21 +0000   Sun, 26 Oct 2025 15:18:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 15:18:21 +0000   Sun, 26 Oct 2025 15:18:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 15:18:21 +0000   Sun, 26 Oct 2025 15:18:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 15:18:21 +0000   Sun, 26 Oct 2025 15:18:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.175
	  Hostname:    pause-750553
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 ded7bfe485724686a7a119dd93a16d6b
	  System UUID:                ded7bfe4-8572-4686-a7a1-19dd93a16d6b
	  Boot ID:                    5113f6ec-d58a-4acb-8b90-586e2ab854c9
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-5km5n                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     13s
	  kube-system                 coredns-66bc5c9577-77frh                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     13s
	  kube-system                 etcd-pause-750553                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         20s
	  kube-system                 kube-apiserver-pause-750553             250m (12%)    0 (0%)      0 (0%)           0 (0%)         20s
	  kube-system                 kube-controller-manager-pause-750553    200m (10%)    0 (0%)      0 (0%)           0 (0%)         20s
	  kube-system                 kube-proxy-5bgtf                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s
	  kube-system                 kube-scheduler-pause-750553             100m (5%)     0 (0%)      0 (0%)           0 (0%)         20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             240Mi (8%)  340Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 11s   kube-proxy       
	  Normal  Starting                 20s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  20s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  20s   kubelet          Node pause-750553 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20s   kubelet          Node pause-750553 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20s   kubelet          Node pause-750553 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15s   node-controller  Node pause-750553 event: Registered Node pause-750553 in Controller
	
	
	==> dmesg <==
	[Oct26 15:11] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000076] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.007466] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.163488] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.089232] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.108001] kauditd_printk_skb: 130 callbacks suppressed
	[  +0.135228] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.255252] kauditd_printk_skb: 18 callbacks suppressed
	[ +10.670777] kauditd_printk_skb: 222 callbacks suppressed
	[Oct26 15:12] kauditd_printk_skb: 38 callbacks suppressed
	[Oct26 15:13] kauditd_printk_skb: 247 callbacks suppressed
	[Oct26 15:17] kauditd_printk_skb: 124 callbacks suppressed
	[Oct26 15:18] kauditd_printk_skb: 46 callbacks suppressed
	[  +0.150519] kauditd_printk_skb: 110 callbacks suppressed
	[  +5.924778] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.796551] kauditd_printk_skb: 140 callbacks suppressed
	
	
	==> etcd [06f4c2a4038ffdca314bf94eef82912d08718933a8cae1d63fbe6923b8188744] <==
	{"level":"info","ts":"2025-10-26T15:18:18.197468Z","caller":"traceutil/trace.go:172","msg":"trace[1134370359] transaction","detail":"{read_only:false; response_revision:328; number_of_response:1; }","duration":"1.098385485s","start":"2025-10-26T15:18:17.099049Z","end":"2025-10-26T15:18:18.197435Z","steps":["trace[1134370359] 'process raft request'  (duration: 547.078131ms)","trace[1134370359] 'compare'  (duration: 548.93273ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-26T15:18:18.198201Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-26T15:18:17.099024Z","time spent":"1.099115874s","remote":"127.0.0.1:48098","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":762,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/kube-proxy-5bgtf.1872138c311adbb3\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-proxy-5bgtf.1872138c311adbb3\" value_size:682 lease:8088915872680160577 >> failure:<>"}
	{"level":"info","ts":"2025-10-26T15:18:18.199494Z","caller":"traceutil/trace.go:172","msg":"trace[15442059] transaction","detail":"{read_only:false; response_revision:329; number_of_response:1; }","duration":"900.125476ms","start":"2025-10-26T15:18:17.299354Z","end":"2025-10-26T15:18:18.199480Z","steps":["trace[15442059] 'process raft request'  (duration: 896.608019ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-26T15:18:18.199832Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-26T15:18:17.299311Z","time spent":"900.29979ms","remote":"127.0.0.1:48098","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":704,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/coredns-66bc5c9577.1872138c3d05a1bb\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/coredns-66bc5c9577.1872138c3d05a1bb\" value_size:622 lease:8088915872680160577 >> failure:<>"}
	{"level":"info","ts":"2025-10-26T15:18:18.200743Z","caller":"traceutil/trace.go:172","msg":"trace[1637677112] transaction","detail":"{read_only:false; response_revision:330; number_of_response:1; }","duration":"898.044101ms","start":"2025-10-26T15:18:17.302326Z","end":"2025-10-26T15:18:18.200370Z","steps":["trace[1637677112] 'process raft request'  (duration: 893.686293ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-26T15:18:18.200846Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-26T15:18:17.302311Z","time spent":"898.471763ms","remote":"127.0.0.1:48330","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3812,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/coredns-66bc5c9577-77frh\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-66bc5c9577-77frh\" value_size:3753 >> failure:<>"}
	{"level":"info","ts":"2025-10-26T15:18:18.200938Z","caller":"traceutil/trace.go:172","msg":"trace[920803324] transaction","detail":"{read_only:false; response_revision:331; number_of_response:1; }","duration":"898.411822ms","start":"2025-10-26T15:18:17.302510Z","end":"2025-10-26T15:18:18.200921Z","steps":["trace[920803324] 'process raft request'  (duration: 893.538782ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-26T15:18:18.204474Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-26T15:18:17.302502Z","time spent":"898.45714ms","remote":"127.0.0.1:48330","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3864,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/coredns-66bc5c9577-5km5n\" mod_revision:327 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-66bc5c9577-5km5n\" value_size:3805 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-66bc5c9577-5km5n\" > >"}
	{"level":"info","ts":"2025-10-26T15:18:18.654437Z","caller":"traceutil/trace.go:172","msg":"trace[1244551045] transaction","detail":"{read_only:false; response_revision:332; number_of_response:1; }","duration":"440.187335ms","start":"2025-10-26T15:18:18.214208Z","end":"2025-10-26T15:18:18.654395Z","steps":["trace[1244551045] 'process raft request'  (duration: 439.953323ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-26T15:18:18.654626Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-26T15:18:18.214190Z","time spent":"440.308132ms","remote":"127.0.0.1:48992","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4041,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/replicasets/kube-system/coredns-66bc5c9577\" mod_revision:294 > success:<request_put:<key:\"/registry/replicasets/kube-system/coredns-66bc5c9577\" value_size:3981 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/coredns-66bc5c9577\" > >"}
	{"level":"info","ts":"2025-10-26T15:18:18.658045Z","caller":"traceutil/trace.go:172","msg":"trace[1379694249] linearizableReadLoop","detail":"{readStateIndex:341; appliedIndex:341; }","duration":"416.294224ms","start":"2025-10-26T15:18:18.237556Z","end":"2025-10-26T15:18:18.653850Z","steps":["trace[1379694249] 'read index received'  (duration: 416.287897ms)","trace[1379694249] 'applied index is now lower than readState.Index'  (duration: 5.11µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-26T15:18:18.659724Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"422.176747ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-26T15:18:18.659832Z","caller":"traceutil/trace.go:172","msg":"trace[890621526] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:332; }","duration":"422.293886ms","start":"2025-10-26T15:18:18.237524Z","end":"2025-10-26T15:18:18.659818Z","steps":["trace[890621526] 'agreement among raft nodes before linearized reading'  (duration: 422.095836ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-26T15:18:18.659875Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-26T15:18:18.237510Z","time spent":"422.350905ms","remote":"127.0.0.1:47984","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2025-10-26T15:18:18.661379Z","caller":"traceutil/trace.go:172","msg":"trace[552245983] transaction","detail":"{read_only:false; response_revision:333; number_of_response:1; }","duration":"444.454992ms","start":"2025-10-26T15:18:18.216905Z","end":"2025-10-26T15:18:18.661360Z","steps":["trace[552245983] 'process raft request'  (duration: 444.230925ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-26T15:18:18.661781Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-26T15:18:18.216887Z","time spent":"444.651428ms","remote":"127.0.0.1:48098","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":704,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/coredns-66bc5c9577.1872138c73139c4f\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/coredns-66bc5c9577.1872138c73139c4f\" value_size:622 lease:8088915872680160577 >> failure:<>"}
	{"level":"info","ts":"2025-10-26T15:18:18.662617Z","caller":"traceutil/trace.go:172","msg":"trace[143703881] transaction","detail":"{read_only:false; response_revision:338; number_of_response:1; }","duration":"116.569075ms","start":"2025-10-26T15:18:18.545908Z","end":"2025-10-26T15:18:18.662477Z","steps":["trace[143703881] 'process raft request'  (duration: 116.521675ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T15:18:18.662944Z","caller":"traceutil/trace.go:172","msg":"trace[1415387288] transaction","detail":"{read_only:false; response_revision:334; number_of_response:1; }","duration":"443.743937ms","start":"2025-10-26T15:18:18.219189Z","end":"2025-10-26T15:18:18.662932Z","steps":["trace[1415387288] 'process raft request'  (duration: 442.057429ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-26T15:18:18.663032Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-26T15:18:18.219175Z","time spent":"443.816235ms","remote":"127.0.0.1:48330","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3864,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/coredns-66bc5c9577-77frh\" mod_revision:330 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-66bc5c9577-77frh\" value_size:3805 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-66bc5c9577-77frh\" > >"}
	{"level":"info","ts":"2025-10-26T15:18:18.663301Z","caller":"traceutil/trace.go:172","msg":"trace[314646202] transaction","detail":"{read_only:false; response_revision:336; number_of_response:1; }","duration":"436.215141ms","start":"2025-10-26T15:18:18.227072Z","end":"2025-10-26T15:18:18.663288Z","steps":["trace[314646202] 'process raft request'  (duration: 435.226263ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-26T15:18:18.663470Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-26T15:18:18.227055Z","time spent":"436.276909ms","remote":"127.0.0.1:48476","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":676,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-b47hz2nhtkyt3kispd6ru45xuq\" mod_revision:19 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-b47hz2nhtkyt3kispd6ru45xuq\" value_size:603 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-b47hz2nhtkyt3kispd6ru45xuq\" > >"}
	{"level":"info","ts":"2025-10-26T15:18:18.664041Z","caller":"traceutil/trace.go:172","msg":"trace[296606446] transaction","detail":"{read_only:false; response_revision:335; number_of_response:1; }","duration":"442.818054ms","start":"2025-10-26T15:18:18.221212Z","end":"2025-10-26T15:18:18.664030Z","steps":["trace[296606446] 'process raft request'  (duration: 440.084665ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-26T15:18:18.664212Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-26T15:18:18.221201Z","time spent":"442.979759ms","remote":"127.0.0.1:48098","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":723,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/coredns-66bc5c9577-5km5n.1872138c73cfc66b\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/coredns-66bc5c9577-5km5n.1872138c73cfc66b\" value_size:635 lease:8088915872680160577 >> failure:<>"}
	{"level":"info","ts":"2025-10-26T15:18:18.665274Z","caller":"traceutil/trace.go:172","msg":"trace[946875473] transaction","detail":"{read_only:false; response_revision:337; number_of_response:1; }","duration":"435.376136ms","start":"2025-10-26T15:18:18.229887Z","end":"2025-10-26T15:18:18.665263Z","steps":["trace[946875473] 'process raft request'  (duration: 432.506355ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-26T15:18:18.665521Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-26T15:18:18.229873Z","time spent":"435.528774ms","remote":"127.0.0.1:48330","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5955,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/etcd-pause-750553\" mod_revision:266 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-pause-750553\" value_size:5903 >> failure:<request_range:<key:\"/registry/pods/kube-system/etcd-pause-750553\" > >"}
	
	
	==> kernel <==
	 15:18:30 up 7 min,  0 users,  load average: 0.99, 0.50, 0.24
	Linux pause-750553 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [9be07d11240a80f1c05c43acb18334335ac1ad7b6ff2cb2952cc120638c677ec] <==
	I1026 15:18:08.142353       1 autoregister_controller.go:144] Starting autoregister controller
	I1026 15:18:08.142376       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1026 15:18:08.142392       1 cache.go:39] Caches are synced for autoregister controller
	I1026 15:18:08.166442       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 15:18:08.168606       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1026 15:18:08.189043       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 15:18:08.196211       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 15:18:08.200729       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1026 15:18:08.902544       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1026 15:18:08.911206       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1026 15:18:08.911241       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 15:18:09.522003       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 15:18:09.569718       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 15:18:09.720049       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1026 15:18:09.731827       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.72.175]
	I1026 15:18:09.733561       1 controller.go:667] quota admission added evaluator for: endpoints
	I1026 15:18:09.739716       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 15:18:10.447955       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 15:18:10.813642       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 15:18:10.846556       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1026 15:18:10.860225       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1026 15:18:15.836386       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1026 15:18:16.137023       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1026 15:18:16.501275       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 15:18:16.506435       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [4b42948e1829ec46e8761bc5ed39e7079218fdceee31fdb5333c6eb75bcfc6a3] <==
	I1026 15:18:15.433786       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1026 15:18:15.434291       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1026 15:18:15.434351       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1026 15:18:15.434812       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1026 15:18:15.436171       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1026 15:18:15.436189       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1026 15:18:15.436238       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1026 15:18:15.437444       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1026 15:18:15.437492       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1026 15:18:15.437540       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1026 15:18:15.437572       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1026 15:18:15.437619       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1026 15:18:15.438960       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1026 15:18:15.439073       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1026 15:18:15.439133       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1026 15:18:15.439139       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1026 15:18:15.439143       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1026 15:18:15.442387       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 15:18:15.444823       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1026 15:18:15.452816       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-750553" podCIDRs=["10.244.0.0/24"]
	I1026 15:18:15.453891       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1026 15:18:15.457160       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1026 15:18:15.458429       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 15:18:15.472951       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1026 15:18:15.474190       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [93975cb982e3c43d536d8251d5e9e4e136461cdd62deed78f47cec56d90e8d8e] <==
	I1026 15:18:18.640259       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 15:18:18.740718       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 15:18:18.740744       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.72.175"]
	E1026 15:18:18.740802       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 15:18:18.809822       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1026 15:18:18.809952       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1026 15:18:18.809980       1 server_linux.go:132] "Using iptables Proxier"
	I1026 15:18:18.819475       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 15:18:18.819753       1 server.go:527] "Version info" version="v1.34.1"
	I1026 15:18:18.819780       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:18:18.825798       1 config.go:200] "Starting service config controller"
	I1026 15:18:18.825829       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 15:18:18.825864       1 config.go:106] "Starting endpoint slice config controller"
	I1026 15:18:18.825867       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 15:18:18.825877       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 15:18:18.825880       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 15:18:18.832865       1 config.go:309] "Starting node config controller"
	I1026 15:18:18.832896       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 15:18:18.832902       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 15:18:18.926283       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 15:18:18.926454       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 15:18:18.926471       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [9e1e8ba0c02401d2b683f372fc58cc11cdf5c439bd9edff51b9c110ece60aaf5] <==
	E1026 15:18:08.159700       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 15:18:08.159758       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 15:18:08.159813       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1026 15:18:08.160069       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1026 15:18:08.164559       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1026 15:18:08.167060       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 15:18:08.167210       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1026 15:18:08.168576       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1026 15:18:08.171372       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 15:18:08.174137       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 15:18:08.175261       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1026 15:18:08.175323       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1026 15:18:08.175474       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1026 15:18:08.175504       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1026 15:18:08.175555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1026 15:18:08.180191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1026 15:18:08.180193       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 15:18:09.038163       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1026 15:18:09.048501       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1026 15:18:09.059588       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1026 15:18:09.098306       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 15:18:09.116035       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 15:18:09.199502       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1026 15:18:09.257997       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1026 15:18:09.832655       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 15:18:11 pause-750553 kubelet[10545]: E1026 15:18:11.861021   10545 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-750553\" already exists" pod="kube-system/kube-apiserver-pause-750553"
	Oct 26 15:18:11 pause-750553 kubelet[10545]: E1026 15:18:11.861646   10545 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-750553\" already exists" pod="kube-system/etcd-pause-750553"
	Oct 26 15:18:11 pause-750553 kubelet[10545]: E1026 15:18:11.861763   10545 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-750553\" already exists" pod="kube-system/kube-scheduler-pause-750553"
	Oct 26 15:18:11 pause-750553 kubelet[10545]: I1026 15:18:11.894886   10545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-750553" podStartSLOduration=1.894869929 podStartE2EDuration="1.894869929s" podCreationTimestamp="2025-10-26 15:18:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:18:11.882070242 +0000 UTC m=+1.237954049" watchObservedRunningTime="2025-10-26 15:18:11.894869929 +0000 UTC m=+1.250753733"
	Oct 26 15:18:11 pause-750553 kubelet[10545]: I1026 15:18:11.910717   10545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-750553" podStartSLOduration=1.910702003 podStartE2EDuration="1.910702003s" podCreationTimestamp="2025-10-26 15:18:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:18:11.896173355 +0000 UTC m=+1.252057163" watchObservedRunningTime="2025-10-26 15:18:11.910702003 +0000 UTC m=+1.266585844"
	Oct 26 15:18:11 pause-750553 kubelet[10545]: I1026 15:18:11.925450   10545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-750553" podStartSLOduration=1.925426055 podStartE2EDuration="1.925426055s" podCreationTimestamp="2025-10-26 15:18:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:18:11.911561552 +0000 UTC m=+1.267445365" watchObservedRunningTime="2025-10-26 15:18:11.925426055 +0000 UTC m=+1.281309864"
	Oct 26 15:18:16 pause-750553 kubelet[10545]: I1026 15:18:16.171669   10545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-750553" podStartSLOduration=6.17163011 podStartE2EDuration="6.17163011s" podCreationTimestamp="2025-10-26 15:18:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:18:11.926478978 +0000 UTC m=+1.282362767" watchObservedRunningTime="2025-10-26 15:18:16.17163011 +0000 UTC m=+5.527513944"
	Oct 26 15:18:16 pause-750553 kubelet[10545]: I1026 15:18:16.197132   10545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c84300cc-7cc1-4b0d-83e7-052a94f0c7ab-xtables-lock\") pod \"kube-proxy-5bgtf\" (UID: \"c84300cc-7cc1-4b0d-83e7-052a94f0c7ab\") " pod="kube-system/kube-proxy-5bgtf"
	Oct 26 15:18:16 pause-750553 kubelet[10545]: I1026 15:18:16.197168   10545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c84300cc-7cc1-4b0d-83e7-052a94f0c7ab-kube-proxy\") pod \"kube-proxy-5bgtf\" (UID: \"c84300cc-7cc1-4b0d-83e7-052a94f0c7ab\") " pod="kube-system/kube-proxy-5bgtf"
	Oct 26 15:18:16 pause-750553 kubelet[10545]: I1026 15:18:16.197183   10545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htmpg\" (UniqueName: \"kubernetes.io/projected/c84300cc-7cc1-4b0d-83e7-052a94f0c7ab-kube-api-access-htmpg\") pod \"kube-proxy-5bgtf\" (UID: \"c84300cc-7cc1-4b0d-83e7-052a94f0c7ab\") " pod="kube-system/kube-proxy-5bgtf"
	Oct 26 15:18:16 pause-750553 kubelet[10545]: I1026 15:18:16.197202   10545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c84300cc-7cc1-4b0d-83e7-052a94f0c7ab-lib-modules\") pod \"kube-proxy-5bgtf\" (UID: \"c84300cc-7cc1-4b0d-83e7-052a94f0c7ab\") " pod="kube-system/kube-proxy-5bgtf"
	Oct 26 15:18:18 pause-750553 kubelet[10545]: I1026 15:18:18.718759   10545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da30f29b-ab29-4d65-ba42-0626bad52267-config-volume\") pod \"coredns-66bc5c9577-5km5n\" (UID: \"da30f29b-ab29-4d65-ba42-0626bad52267\") " pod="kube-system/coredns-66bc5c9577-5km5n"
	Oct 26 15:18:18 pause-750553 kubelet[10545]: I1026 15:18:18.719482   10545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6fm8\" (UniqueName: \"kubernetes.io/projected/af90376e-433e-4f19-b0c8-0ddf58a79b0b-kube-api-access-s6fm8\") pod \"coredns-66bc5c9577-77frh\" (UID: \"af90376e-433e-4f19-b0c8-0ddf58a79b0b\") " pod="kube-system/coredns-66bc5c9577-77frh"
	Oct 26 15:18:18 pause-750553 kubelet[10545]: I1026 15:18:18.719681   10545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mstb8\" (UniqueName: \"kubernetes.io/projected/da30f29b-ab29-4d65-ba42-0626bad52267-kube-api-access-mstb8\") pod \"coredns-66bc5c9577-5km5n\" (UID: \"da30f29b-ab29-4d65-ba42-0626bad52267\") " pod="kube-system/coredns-66bc5c9577-5km5n"
	Oct 26 15:18:18 pause-750553 kubelet[10545]: I1026 15:18:18.719715   10545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af90376e-433e-4f19-b0c8-0ddf58a79b0b-config-volume\") pod \"coredns-66bc5c9577-77frh\" (UID: \"af90376e-433e-4f19-b0c8-0ddf58a79b0b\") " pod="kube-system/coredns-66bc5c9577-77frh"
	Oct 26 15:18:18 pause-750553 kubelet[10545]: I1026 15:18:18.893059   10545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5bgtf" podStartSLOduration=2.8929712800000003 podStartE2EDuration="2.89297128s" podCreationTimestamp="2025-10-26 15:18:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:18:18.892773818 +0000 UTC m=+8.248657604" watchObservedRunningTime="2025-10-26 15:18:18.89297128 +0000 UTC m=+8.248855086"
	Oct 26 15:18:19 pause-750553 kubelet[10545]: I1026 15:18:19.902377   10545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-77frh" podStartSLOduration=2.9023631659999998 podStartE2EDuration="2.902363166s" podCreationTimestamp="2025-10-26 15:18:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:18:19.902330952 +0000 UTC m=+9.258214759" watchObservedRunningTime="2025-10-26 15:18:19.902363166 +0000 UTC m=+9.258246972"
	Oct 26 15:18:20 pause-750553 kubelet[10545]: E1026 15:18:20.879893   10545 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761491900879413193  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 26 15:18:20 pause-750553 kubelet[10545]: E1026 15:18:20.879955   10545 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761491900879413193  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 26 15:18:21 pause-750553 kubelet[10545]: I1026 15:18:21.229256   10545 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 26 15:18:21 pause-750553 kubelet[10545]: I1026 15:18:21.230207   10545 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 26 15:18:22 pause-750553 kubelet[10545]: I1026 15:18:22.301329   10545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-5km5n" podStartSLOduration=5.301313806 podStartE2EDuration="5.301313806s" podCreationTimestamp="2025-10-26 15:18:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:18:19.92125913 +0000 UTC m=+9.277142920" watchObservedRunningTime="2025-10-26 15:18:22.301313806 +0000 UTC m=+11.657197612"
	Oct 26 15:18:27 pause-750553 kubelet[10545]: I1026 15:18:27.806693   10545 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 26 15:18:30 pause-750553 kubelet[10545]: E1026 15:18:30.881436   10545 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761491910880904314  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 26 15:18:30 pause-750553 kubelet[10545]: E1026 15:18:30.881454   10545 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761491910880904314  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-750553 -n pause-750553
helpers_test.go:269: (dbg) Run:  kubectl --context pause-750553 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-750553 -n pause-750553
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-750553 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-750553 logs -n 25: (1.163913782s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                         ARGS                                                                         │      PROFILE       │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-961864 sudo systemctl status kubelet --all --full --no-pager                                                                               │ bridge-961864      │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │ 26 Oct 25 15:17 UTC │
	│ ssh     │ -p bridge-961864 sudo systemctl cat kubelet --no-pager                                                                                               │ bridge-961864      │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │ 26 Oct 25 15:17 UTC │
	│ ssh     │ -p bridge-961864 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                │ bridge-961864      │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │ 26 Oct 25 15:17 UTC │
	│ ssh     │ -p bridge-961864 sudo cat /etc/kubernetes/kubelet.conf                                                                                               │ bridge-961864      │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │ 26 Oct 25 15:17 UTC │
	│ ssh     │ -p bridge-961864 sudo cat /var/lib/kubelet/config.yaml                                                                                               │ bridge-961864      │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │ 26 Oct 25 15:17 UTC │
	│ ssh     │ -p bridge-961864 sudo systemctl status docker --all --full --no-pager                                                                                │ bridge-961864      │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │                     │
	│ ssh     │ -p bridge-961864 sudo systemctl cat docker --no-pager                                                                                                │ bridge-961864      │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │ 26 Oct 25 15:17 UTC │
	│ ssh     │ -p bridge-961864 sudo cat /etc/docker/daemon.json                                                                                                    │ bridge-961864      │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │ 26 Oct 25 15:17 UTC │
	│ ssh     │ -p bridge-961864 sudo docker system info                                                                                                             │ bridge-961864      │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │                     │
	│ ssh     │ -p bridge-961864 sudo systemctl status cri-docker --all --full --no-pager                                                                            │ bridge-961864      │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │                     │
	│ ssh     │ -p bridge-961864 sudo systemctl cat cri-docker --no-pager                                                                                            │ bridge-961864      │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │ 26 Oct 25 15:17 UTC │
	│ ssh     │ -p bridge-961864 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                       │ bridge-961864      │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │                     │
	│ ssh     │ -p bridge-961864 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                 │ bridge-961864      │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │ 26 Oct 25 15:17 UTC │
	│ ssh     │ -p bridge-961864 sudo cri-dockerd --version                                                                                                          │ bridge-961864      │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │ 26 Oct 25 15:17 UTC │
	│ ssh     │ -p bridge-961864 sudo systemctl status containerd --all --full --no-pager                                                                            │ bridge-961864      │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │                     │
	│ ssh     │ -p bridge-961864 sudo systemctl cat containerd --no-pager                                                                                            │ bridge-961864      │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │ 26 Oct 25 15:17 UTC │
	│ ssh     │ -p bridge-961864 sudo cat /lib/systemd/system/containerd.service                                                                                     │ bridge-961864      │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │ 26 Oct 25 15:17 UTC │
	│ ssh     │ -p bridge-961864 sudo cat /etc/containerd/config.toml                                                                                                │ bridge-961864      │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │ 26 Oct 25 15:17 UTC │
	│ ssh     │ -p bridge-961864 sudo containerd config dump                                                                                                         │ bridge-961864      │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │ 26 Oct 25 15:17 UTC │
	│ ssh     │ -p bridge-961864 sudo systemctl status crio --all --full --no-pager                                                                                  │ bridge-961864      │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │ 26 Oct 25 15:17 UTC │
	│ ssh     │ -p bridge-961864 sudo systemctl cat crio --no-pager                                                                                                  │ bridge-961864      │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │ 26 Oct 25 15:17 UTC │
	│ ssh     │ -p bridge-961864 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                        │ bridge-961864      │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │ 26 Oct 25 15:17 UTC │
	│ ssh     │ -p bridge-961864 sudo crio config                                                                                                                    │ bridge-961864      │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │ 26 Oct 25 15:17 UTC │
	│ delete  │ -p bridge-961864                                                                                                                                     │ bridge-961864      │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │ 26 Oct 25 15:17 UTC │
	│ start   │ -p embed-certs-163393 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1 │ embed-certs-163393 │ jenkins │ v1.37.0 │ 26 Oct 25 15:17 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 15:17:51
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 15:17:51.147910  178853 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:17:51.148197  178853 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:17:51.148215  178853 out.go:374] Setting ErrFile to fd 2...
	I1026 15:17:51.148220  178853 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:17:51.148403  178853 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-137233/.minikube/bin
	I1026 15:17:51.148903  178853 out.go:368] Setting JSON to false
	I1026 15:17:51.149878  178853 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7205,"bootTime":1761484666,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 15:17:51.149977  178853 start.go:141] virtualization: kvm guest
	I1026 15:17:51.151763  178853 out.go:179] * [embed-certs-163393] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 15:17:51.153230  178853 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 15:17:51.153272  178853 notify.go:220] Checking for updates...
	I1026 15:17:51.155573  178853 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 15:17:51.156759  178853 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-137233/kubeconfig
	I1026 15:17:51.158131  178853 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-137233/.minikube
	I1026 15:17:51.159352  178853 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 15:17:51.160377  178853 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 15:17:51.162001  178853 config.go:182] Loaded profile config "no-preload-758002": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:17:51.162148  178853 config.go:182] Loaded profile config "old-k8s-version-065983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1026 15:17:51.162319  178853 config.go:182] Loaded profile config "pause-750553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:17:51.162435  178853 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 15:17:51.203597  178853 out.go:179] * Using the kvm2 driver based on user configuration
	I1026 15:17:51.204531  178853 start.go:305] selected driver: kvm2
	I1026 15:17:51.204546  178853 start.go:925] validating driver "kvm2" against <nil>
	I1026 15:17:51.204558  178853 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 15:17:51.205289  178853 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 15:17:51.205575  178853 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:17:51.205602  178853 cni.go:84] Creating CNI manager for ""
	I1026 15:17:51.205666  178853 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 15:17:51.205677  178853 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1026 15:17:51.205744  178853 start.go:349] cluster config:
	{Name:embed-certs-163393 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-163393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:17:51.205889  178853 iso.go:125] acquiring lock: {Name:mkfe78fcc13f0f0cc3fec30206c34a5da423b32d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:17:51.207130  178853 out.go:179] * Starting "embed-certs-163393" primary control-plane node in "embed-certs-163393" cluster
	I1026 15:17:51.207973  178853 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:17:51.208005  178853 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-137233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1026 15:17:51.208015  178853 cache.go:58] Caching tarball of preloaded images
	I1026 15:17:51.208098  178853 preload.go:233] Found /home/jenkins/minikube-integration/21664-137233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 15:17:51.208110  178853 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 15:17:51.208185  178853 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/config.json ...
	I1026 15:17:51.208201  178853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/config.json: {Name:mkdb48ff5a82f3eb9f8a31e51d858377286df427 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:17:51.208332  178853 start.go:360] acquireMachinesLock for embed-certs-163393: {Name:mka0e861669c2f6d38861d0614c7d3b8dd89392c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1026 15:17:51.208360  178853 start.go:364] duration metric: took 15.048µs to acquireMachinesLock for "embed-certs-163393"
	I1026 15:17:51.208377  178853 start.go:93] Provisioning new machine with config: &{Name:embed-certs-163393 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-163393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:17:51.208425  178853 start.go:125] createHost starting for "" (driver="kvm2")
	I1026 15:17:48.698138  177820 out.go:252]   - Generating certificates and keys ...
	I1026 15:17:48.698256  177820 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1026 15:17:48.698362  177820 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1026 15:17:49.158713  177820 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 15:17:49.541078  177820 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1026 15:17:49.623791  177820 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1026 15:17:49.795012  177820 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1026 15:17:50.265370  177820 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1026 15:17:50.265638  177820 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-758002] and IPs [192.168.50.112 127.0.0.1 ::1]
	I1026 15:17:50.329991  177820 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1026 15:17:50.330222  177820 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-758002] and IPs [192.168.50.112 127.0.0.1 ::1]
	I1026 15:17:50.409591  177820 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 15:17:50.535871  177820 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 15:17:50.650659  177820 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1026 15:17:50.650762  177820 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 15:17:50.863171  177820 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 15:17:51.185589  177820 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 15:17:51.355852  177820 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 15:17:51.428943  177820 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 15:17:51.491351  177820 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 15:17:51.492399  177820 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 15:17:51.494791  177820 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 15:17:51.499594  177820 out.go:252]   - Booting up control plane ...
	I1026 15:17:51.499744  177820 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 15:17:51.499859  177820 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 15:17:51.499983  177820 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 15:17:51.518034  177820 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 15:17:51.518182  177820 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1026 15:17:51.525893  177820 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1026 15:17:51.526119  177820 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 15:17:51.526210  177820 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1026 15:17:51.747496  177820 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 15:17:51.747668  177820 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 15:17:52.751876  177820 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.003791326s
	I1026 15:17:52.768370  177820 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1026 15:17:52.769299  177820 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.50.112:8443/livez
	I1026 15:17:52.769448  177820 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1026 15:17:52.769578  177820 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1026 15:17:48.346173  170754 logs.go:123] Gathering logs for dmesg ...
	I1026 15:17:48.346208  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 15:17:48.364392  170754 logs.go:123] Gathering logs for describe nodes ...
	I1026 15:17:48.364423  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 15:17:48.448657  170754 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 15:17:48.448677  170754 logs.go:123] Gathering logs for etcd [bb4299fbd87de3fa7285427701084c7eb64247af9c01979f2a77e504b49ae922] ...
	I1026 15:17:48.448691  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb4299fbd87de3fa7285427701084c7eb64247af9c01979f2a77e504b49ae922"
	I1026 15:17:48.504353  170754 logs.go:123] Gathering logs for etcd [7b490bc45d498f921ac86eecf23d2e02d5f1762dbabbe70665921b8a617269f9] ...
	I1026 15:17:48.504398  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b490bc45d498f921ac86eecf23d2e02d5f1762dbabbe70665921b8a617269f9"
	I1026 15:17:48.555160  170754 logs.go:123] Gathering logs for CRI-O ...
	I1026 15:17:48.555211  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 15:17:51.413574  170754 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:17:51.437138  170754 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 15:17:51.437209  170754 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 15:17:51.486105  170754 cri.go:89] found id: "c2eb59ce11415f4c48d08a8b45e2648e754062ced1418fd4eca5fe560261f0d8"
	I1026 15:17:51.486141  170754 cri.go:89] found id: ""
	I1026 15:17:51.486155  170754 logs.go:282] 1 containers: [c2eb59ce11415f4c48d08a8b45e2648e754062ced1418fd4eca5fe560261f0d8]
	I1026 15:17:51.486228  170754 ssh_runner.go:195] Run: which crictl
	I1026 15:17:51.490771  170754 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 15:17:51.490862  170754 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 15:17:51.542332  170754 cri.go:89] found id: "bb4299fbd87de3fa7285427701084c7eb64247af9c01979f2a77e504b49ae922"
	I1026 15:17:51.542362  170754 cri.go:89] found id: "7b490bc45d498f921ac86eecf23d2e02d5f1762dbabbe70665921b8a617269f9"
	I1026 15:17:51.542369  170754 cri.go:89] found id: ""
	I1026 15:17:51.542380  170754 logs.go:282] 2 containers: [bb4299fbd87de3fa7285427701084c7eb64247af9c01979f2a77e504b49ae922 7b490bc45d498f921ac86eecf23d2e02d5f1762dbabbe70665921b8a617269f9]
	I1026 15:17:51.542470  170754 ssh_runner.go:195] Run: which crictl
	I1026 15:17:51.547814  170754 ssh_runner.go:195] Run: which crictl
	I1026 15:17:51.553515  170754 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 15:17:51.553584  170754 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 15:17:51.599839  170754 cri.go:89] found id: "a829b93188eae9f7d8022317e73e6555da7294c8eacd4488e8a14341738a69de"
	I1026 15:17:51.599865  170754 cri.go:89] found id: ""
	I1026 15:17:51.599873  170754 logs.go:282] 1 containers: [a829b93188eae9f7d8022317e73e6555da7294c8eacd4488e8a14341738a69de]
	I1026 15:17:51.599930  170754 ssh_runner.go:195] Run: which crictl
	I1026 15:17:51.604575  170754 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 15:17:51.604633  170754 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 15:17:51.645484  170754 cri.go:89] found id: "f605e623b16c68aff5d3a7edacbe9493943f88f81f618ce23a4dc59180fa8148"
	I1026 15:17:51.645512  170754 cri.go:89] found id: "ae391e45391d3c752ce1d2eb3e25b24f9775855ea13aa9c77062e182dcf5cb4c"
	I1026 15:17:51.645518  170754 cri.go:89] found id: ""
	I1026 15:17:51.645529  170754 logs.go:282] 2 containers: [f605e623b16c68aff5d3a7edacbe9493943f88f81f618ce23a4dc59180fa8148 ae391e45391d3c752ce1d2eb3e25b24f9775855ea13aa9c77062e182dcf5cb4c]
	I1026 15:17:51.645600  170754 ssh_runner.go:195] Run: which crictl
	I1026 15:17:51.650153  170754 ssh_runner.go:195] Run: which crictl
	I1026 15:17:51.655044  170754 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 15:17:51.655091  170754 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 15:17:51.694776  170754 cri.go:89] found id: "4517700a8fb8c777ec4e07b98cb1a392c6629f9dd19fdb0058f5650e82d81f5f"
	I1026 15:17:51.694804  170754 cri.go:89] found id: ""
	I1026 15:17:51.694815  170754 logs.go:282] 1 containers: [4517700a8fb8c777ec4e07b98cb1a392c6629f9dd19fdb0058f5650e82d81f5f]
	I1026 15:17:51.694884  170754 ssh_runner.go:195] Run: which crictl
	I1026 15:17:51.700514  170754 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 15:17:51.700581  170754 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 15:17:51.741646  170754 cri.go:89] found id: "d554fbe20b15588666b6e80b64f0b06061139965ba51d8e9a17c3e6741e26eb5"
	I1026 15:17:51.741681  170754 cri.go:89] found id: ""
	I1026 15:17:51.741695  170754 logs.go:282] 1 containers: [d554fbe20b15588666b6e80b64f0b06061139965ba51d8e9a17c3e6741e26eb5]
	I1026 15:17:51.741764  170754 ssh_runner.go:195] Run: which crictl
	I1026 15:17:51.747517  170754 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 15:17:51.747585  170754 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 15:17:51.798072  170754 cri.go:89] found id: ""
	I1026 15:17:51.798115  170754 logs.go:282] 0 containers: []
	W1026 15:17:51.798139  170754 logs.go:284] No container was found matching "kindnet"
	I1026 15:17:51.798163  170754 logs.go:123] Gathering logs for kube-proxy [4517700a8fb8c777ec4e07b98cb1a392c6629f9dd19fdb0058f5650e82d81f5f] ...
	I1026 15:17:51.798185  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4517700a8fb8c777ec4e07b98cb1a392c6629f9dd19fdb0058f5650e82d81f5f"
	I1026 15:17:51.841651  170754 logs.go:123] Gathering logs for kube-controller-manager [d554fbe20b15588666b6e80b64f0b06061139965ba51d8e9a17c3e6741e26eb5] ...
	I1026 15:17:51.841684  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d554fbe20b15588666b6e80b64f0b06061139965ba51d8e9a17c3e6741e26eb5"
	I1026 15:17:51.893207  170754 logs.go:123] Gathering logs for container status ...
	I1026 15:17:51.893250  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 15:17:51.939864  170754 logs.go:123] Gathering logs for dmesg ...
	I1026 15:17:51.939900  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 15:17:51.955512  170754 logs.go:123] Gathering logs for describe nodes ...
	I1026 15:17:51.955541  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 15:17:52.032163  170754 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 15:17:52.032189  170754 logs.go:123] Gathering logs for kube-apiserver [c2eb59ce11415f4c48d08a8b45e2648e754062ced1418fd4eca5fe560261f0d8] ...
	I1026 15:17:52.032205  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2eb59ce11415f4c48d08a8b45e2648e754062ced1418fd4eca5fe560261f0d8"
	I1026 15:17:52.098688  170754 logs.go:123] Gathering logs for etcd [bb4299fbd87de3fa7285427701084c7eb64247af9c01979f2a77e504b49ae922] ...
	I1026 15:17:52.098726  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb4299fbd87de3fa7285427701084c7eb64247af9c01979f2a77e504b49ae922"
	I1026 15:17:52.143993  170754 logs.go:123] Gathering logs for coredns [a829b93188eae9f7d8022317e73e6555da7294c8eacd4488e8a14341738a69de] ...
	I1026 15:17:52.144026  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a829b93188eae9f7d8022317e73e6555da7294c8eacd4488e8a14341738a69de"
	I1026 15:17:52.181622  170754 logs.go:123] Gathering logs for kube-scheduler [f605e623b16c68aff5d3a7edacbe9493943f88f81f618ce23a4dc59180fa8148] ...
	I1026 15:17:52.181655  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f605e623b16c68aff5d3a7edacbe9493943f88f81f618ce23a4dc59180fa8148"
	I1026 15:17:52.257644  170754 logs.go:123] Gathering logs for kube-scheduler [ae391e45391d3c752ce1d2eb3e25b24f9775855ea13aa9c77062e182dcf5cb4c] ...
	I1026 15:17:52.257680  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae391e45391d3c752ce1d2eb3e25b24f9775855ea13aa9c77062e182dcf5cb4c"
	I1026 15:17:52.296652  170754 logs.go:123] Gathering logs for CRI-O ...
	I1026 15:17:52.296681  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 15:17:52.580982  170754 logs.go:123] Gathering logs for kubelet ...
	I1026 15:17:52.581013  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 15:17:52.672764  170754 logs.go:123] Gathering logs for etcd [7b490bc45d498f921ac86eecf23d2e02d5f1762dbabbe70665921b8a617269f9] ...
	I1026 15:17:52.672801  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b490bc45d498f921ac86eecf23d2e02d5f1762dbabbe70665921b8a617269f9"
	W1026 15:17:52.198631  176942 pod_ready.go:104] pod "coredns-5dd5756b68-46566" is not "Ready", error: <nil>
	W1026 15:17:54.200283  176942 pod_ready.go:104] pod "coredns-5dd5756b68-46566" is not "Ready", error: <nil>
	I1026 15:17:51.209683  178853 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1026 15:17:51.209848  178853 start.go:159] libmachine.API.Create for "embed-certs-163393" (driver="kvm2")
	I1026 15:17:51.209883  178853 client.go:168] LocalClient.Create starting
	I1026 15:17:51.209935  178853 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca.pem
	I1026 15:17:51.209970  178853 main.go:141] libmachine: Decoding PEM data...
	I1026 15:17:51.209982  178853 main.go:141] libmachine: Parsing certificate...
	I1026 15:17:51.210052  178853 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-137233/.minikube/certs/cert.pem
	I1026 15:17:51.210079  178853 main.go:141] libmachine: Decoding PEM data...
	I1026 15:17:51.210095  178853 main.go:141] libmachine: Parsing certificate...
	I1026 15:17:51.210390  178853 main.go:141] libmachine: creating domain...
	I1026 15:17:51.210403  178853 main.go:141] libmachine: creating network...
	I1026 15:17:51.211843  178853 main.go:141] libmachine: found existing default network
	I1026 15:17:51.212077  178853 main.go:141] libmachine: <network connections='3'>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1026 15:17:51.213120  178853 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001bc2c60}
	I1026 15:17:51.213192  178853 main.go:141] libmachine: defining private network:
	
	<network>
	  <name>mk-embed-certs-163393</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1026 15:17:51.218283  178853 main.go:141] libmachine: creating private network mk-embed-certs-163393 192.168.39.0/24...
	I1026 15:17:51.287179  178853 main.go:141] libmachine: private network mk-embed-certs-163393 192.168.39.0/24 created
	I1026 15:17:51.287491  178853 main.go:141] libmachine: <network>
	  <name>mk-embed-certs-163393</name>
	  <uuid>d35dbd72-8087-471b-8adf-d60064f596c2</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:ea:64:02'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1026 15:17:51.287523  178853 main.go:141] libmachine: setting up store path in /home/jenkins/minikube-integration/21664-137233/.minikube/machines/embed-certs-163393 ...
	I1026 15:17:51.287554  178853 main.go:141] libmachine: building disk image from file:///home/jenkins/minikube-integration/21664-137233/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso
	I1026 15:17:51.287568  178853 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/21664-137233/.minikube
	I1026 15:17:51.287654  178853 main.go:141] libmachine: Downloading /home/jenkins/minikube-integration/21664-137233/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21664-137233/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso...
	I1026 15:17:51.623041  178853 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/21664-137233/.minikube/machines/embed-certs-163393/id_rsa...
	I1026 15:17:51.910503  178853 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/21664-137233/.minikube/machines/embed-certs-163393/embed-certs-163393.rawdisk...
	I1026 15:17:51.910549  178853 main.go:141] libmachine: Writing magic tar header
	I1026 15:17:51.910591  178853 main.go:141] libmachine: Writing SSH key tar header
	I1026 15:17:51.910713  178853 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/21664-137233/.minikube/machines/embed-certs-163393 ...
	I1026 15:17:51.910818  178853 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21664-137233/.minikube/machines/embed-certs-163393
	I1026 15:17:51.910872  178853 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21664-137233/.minikube/machines/embed-certs-163393 (perms=drwx------)
	I1026 15:17:51.910901  178853 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21664-137233/.minikube/machines
	I1026 15:17:51.910923  178853 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21664-137233/.minikube/machines (perms=drwxr-xr-x)
	I1026 15:17:51.910947  178853 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21664-137233/.minikube
	I1026 15:17:51.910962  178853 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21664-137233/.minikube (perms=drwxr-xr-x)
	I1026 15:17:51.910979  178853 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21664-137233
	I1026 15:17:51.911028  178853 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21664-137233 (perms=drwxrwxr-x)
	I1026 15:17:51.911065  178853 main.go:141] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1026 15:17:51.911084  178853 main.go:141] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1026 15:17:51.911103  178853 main.go:141] libmachine: checking permissions on dir: /home/jenkins
	I1026 15:17:51.911134  178853 main.go:141] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1026 15:17:51.911155  178853 main.go:141] libmachine: checking permissions on dir: /home
	I1026 15:17:51.911172  178853 main.go:141] libmachine: skipping /home - not owner
	I1026 15:17:51.911183  178853 main.go:141] libmachine: defining domain...
	I1026 15:17:51.912826  178853 main.go:141] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>embed-certs-163393</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21664-137233/.minikube/machines/embed-certs-163393/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21664-137233/.minikube/machines/embed-certs-163393/embed-certs-163393.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-embed-certs-163393'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1026 15:17:51.927139  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:d6:fa:42 in network default
	I1026 15:17:51.928070  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:17:51.928094  178853 main.go:141] libmachine: starting domain...
	I1026 15:17:51.928102  178853 main.go:141] libmachine: ensuring networks are active...
	I1026 15:17:51.929107  178853 main.go:141] libmachine: Ensuring network default is active
	I1026 15:17:51.929723  178853 main.go:141] libmachine: Ensuring network mk-embed-certs-163393 is active
	I1026 15:17:51.930517  178853 main.go:141] libmachine: getting domain XML...
	I1026 15:17:51.931944  178853 main.go:141] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>embed-certs-163393</name>
	  <uuid>31c0eca2-26a4-41c9-a6df-d975a508de47</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21664-137233/.minikube/machines/embed-certs-163393/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21664-137233/.minikube/machines/embed-certs-163393/embed-certs-163393.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:bb:5d:75'/>
	      <source network='mk-embed-certs-163393'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:d6:fa:42'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1026 15:17:53.405657  178853 main.go:141] libmachine: waiting for domain to start...
	I1026 15:17:53.407176  178853 main.go:141] libmachine: domain is now running
	I1026 15:17:53.407191  178853 main.go:141] libmachine: waiting for IP...
	I1026 15:17:53.408058  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:17:53.408911  178853 main.go:141] libmachine: no network interface addresses found for domain embed-certs-163393 (source=lease)
	I1026 15:17:53.408932  178853 main.go:141] libmachine: trying to list again with source=arp
	I1026 15:17:53.409391  178853 main.go:141] libmachine: unable to find current IP address of domain embed-certs-163393 in network mk-embed-certs-163393 (interfaces detected: [])
	I1026 15:17:53.409485  178853 retry.go:31] will retry after 224.378935ms: waiting for domain to come up
	I1026 15:17:53.636367  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:17:53.637347  178853 main.go:141] libmachine: no network interface addresses found for domain embed-certs-163393 (source=lease)
	I1026 15:17:53.637371  178853 main.go:141] libmachine: trying to list again with source=arp
	I1026 15:17:53.637830  178853 main.go:141] libmachine: unable to find current IP address of domain embed-certs-163393 in network mk-embed-certs-163393 (interfaces detected: [])
	I1026 15:17:53.637878  178853 retry.go:31] will retry after 370.25291ms: waiting for domain to come up
	I1026 15:17:54.009733  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:17:54.010685  178853 main.go:141] libmachine: no network interface addresses found for domain embed-certs-163393 (source=lease)
	I1026 15:17:54.010706  178853 main.go:141] libmachine: trying to list again with source=arp
	I1026 15:17:54.011127  178853 main.go:141] libmachine: unable to find current IP address of domain embed-certs-163393 in network mk-embed-certs-163393 (interfaces detected: [])
	I1026 15:17:54.011197  178853 retry.go:31] will retry after 386.092672ms: waiting for domain to come up
	I1026 15:17:54.398647  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:17:54.399486  178853 main.go:141] libmachine: no network interface addresses found for domain embed-certs-163393 (source=lease)
	I1026 15:17:54.399507  178853 main.go:141] libmachine: trying to list again with source=arp
	I1026 15:17:54.399943  178853 main.go:141] libmachine: unable to find current IP address of domain embed-certs-163393 in network mk-embed-certs-163393 (interfaces detected: [])
	I1026 15:17:54.399985  178853 retry.go:31] will retry after 586.427877ms: waiting for domain to come up
	I1026 15:17:54.987961  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:17:54.989040  178853 main.go:141] libmachine: no network interface addresses found for domain embed-certs-163393 (source=lease)
	I1026 15:17:54.989066  178853 main.go:141] libmachine: trying to list again with source=arp
	I1026 15:17:54.989529  178853 main.go:141] libmachine: unable to find current IP address of domain embed-certs-163393 in network mk-embed-certs-163393 (interfaces detected: [])
	I1026 15:17:54.989573  178853 retry.go:31] will retry after 576.503336ms: waiting for domain to come up
	I1026 15:17:55.567671  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:17:55.568615  178853 main.go:141] libmachine: no network interface addresses found for domain embed-certs-163393 (source=lease)
	I1026 15:17:55.568638  178853 main.go:141] libmachine: trying to list again with source=arp
	I1026 15:17:55.569117  178853 main.go:141] libmachine: unable to find current IP address of domain embed-certs-163393 in network mk-embed-certs-163393 (interfaces detected: [])
	I1026 15:17:55.569174  178853 retry.go:31] will retry after 890.583074ms: waiting for domain to come up
	I1026 15:17:56.439540  177820 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.669618565s
	I1026 15:17:57.141896  177820 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.373548346s
	I1026 15:17:55.231602  170754 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:17:55.254037  170754 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 15:17:55.254103  170754 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 15:17:55.309992  170754 cri.go:89] found id: "c2eb59ce11415f4c48d08a8b45e2648e754062ced1418fd4eca5fe560261f0d8"
	I1026 15:17:55.310025  170754 cri.go:89] found id: ""
	I1026 15:17:55.310036  170754 logs.go:282] 1 containers: [c2eb59ce11415f4c48d08a8b45e2648e754062ced1418fd4eca5fe560261f0d8]
	I1026 15:17:55.310099  170754 ssh_runner.go:195] Run: which crictl
	I1026 15:17:55.315732  170754 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 15:17:55.315799  170754 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 15:17:55.362009  170754 cri.go:89] found id: "bb4299fbd87de3fa7285427701084c7eb64247af9c01979f2a77e504b49ae922"
	I1026 15:17:55.362032  170754 cri.go:89] found id: "7b490bc45d498f921ac86eecf23d2e02d5f1762dbabbe70665921b8a617269f9"
	I1026 15:17:55.362037  170754 cri.go:89] found id: ""
	I1026 15:17:55.362046  170754 logs.go:282] 2 containers: [bb4299fbd87de3fa7285427701084c7eb64247af9c01979f2a77e504b49ae922 7b490bc45d498f921ac86eecf23d2e02d5f1762dbabbe70665921b8a617269f9]
	I1026 15:17:55.362112  170754 ssh_runner.go:195] Run: which crictl
	I1026 15:17:55.367502  170754 ssh_runner.go:195] Run: which crictl
	I1026 15:17:55.373256  170754 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 15:17:55.373343  170754 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 15:17:55.417093  170754 cri.go:89] found id: "a829b93188eae9f7d8022317e73e6555da7294c8eacd4488e8a14341738a69de"
	I1026 15:17:55.417121  170754 cri.go:89] found id: ""
	I1026 15:17:55.417134  170754 logs.go:282] 1 containers: [a829b93188eae9f7d8022317e73e6555da7294c8eacd4488e8a14341738a69de]
	I1026 15:17:55.417208  170754 ssh_runner.go:195] Run: which crictl
	I1026 15:17:55.422018  170754 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 15:17:55.422091  170754 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 15:17:55.470543  170754 cri.go:89] found id: "f605e623b16c68aff5d3a7edacbe9493943f88f81f618ce23a4dc59180fa8148"
	I1026 15:17:55.470612  170754 cri.go:89] found id: "ae391e45391d3c752ce1d2eb3e25b24f9775855ea13aa9c77062e182dcf5cb4c"
	I1026 15:17:55.470621  170754 cri.go:89] found id: ""
	I1026 15:17:55.470646  170754 logs.go:282] 2 containers: [f605e623b16c68aff5d3a7edacbe9493943f88f81f618ce23a4dc59180fa8148 ae391e45391d3c752ce1d2eb3e25b24f9775855ea13aa9c77062e182dcf5cb4c]
	I1026 15:17:55.470733  170754 ssh_runner.go:195] Run: which crictl
	I1026 15:17:55.475957  170754 ssh_runner.go:195] Run: which crictl
	I1026 15:17:55.480694  170754 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 15:17:55.480757  170754 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 15:17:55.521988  170754 cri.go:89] found id: "4517700a8fb8c777ec4e07b98cb1a392c6629f9dd19fdb0058f5650e82d81f5f"
	I1026 15:17:55.522016  170754 cri.go:89] found id: ""
	I1026 15:17:55.522028  170754 logs.go:282] 1 containers: [4517700a8fb8c777ec4e07b98cb1a392c6629f9dd19fdb0058f5650e82d81f5f]
	I1026 15:17:55.522095  170754 ssh_runner.go:195] Run: which crictl
	I1026 15:17:55.527353  170754 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 15:17:55.527430  170754 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 15:17:55.570588  170754 cri.go:89] found id: "d554fbe20b15588666b6e80b64f0b06061139965ba51d8e9a17c3e6741e26eb5"
	I1026 15:17:55.570609  170754 cri.go:89] found id: ""
	I1026 15:17:55.570620  170754 logs.go:282] 1 containers: [d554fbe20b15588666b6e80b64f0b06061139965ba51d8e9a17c3e6741e26eb5]
	I1026 15:17:55.570695  170754 ssh_runner.go:195] Run: which crictl
	I1026 15:17:55.575654  170754 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 15:17:55.575745  170754 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 15:17:55.620226  170754 cri.go:89] found id: ""
	I1026 15:17:55.620260  170754 logs.go:282] 0 containers: []
	W1026 15:17:55.620281  170754 logs.go:284] No container was found matching "kindnet"
	I1026 15:17:55.620295  170754 logs.go:123] Gathering logs for kube-controller-manager [d554fbe20b15588666b6e80b64f0b06061139965ba51d8e9a17c3e6741e26eb5] ...
	I1026 15:17:55.620322  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d554fbe20b15588666b6e80b64f0b06061139965ba51d8e9a17c3e6741e26eb5"
	I1026 15:17:55.673841  170754 logs.go:123] Gathering logs for CRI-O ...
	I1026 15:17:55.673881  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 15:17:56.013750  170754 logs.go:123] Gathering logs for dmesg ...
	I1026 15:17:56.013789  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 15:17:56.033300  170754 logs.go:123] Gathering logs for describe nodes ...
	I1026 15:17:56.033349  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 15:17:56.120012  170754 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 15:17:56.120039  170754 logs.go:123] Gathering logs for kube-apiserver [c2eb59ce11415f4c48d08a8b45e2648e754062ced1418fd4eca5fe560261f0d8] ...
	I1026 15:17:56.120056  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2eb59ce11415f4c48d08a8b45e2648e754062ced1418fd4eca5fe560261f0d8"
	I1026 15:17:56.198143  170754 logs.go:123] Gathering logs for kube-scheduler [f605e623b16c68aff5d3a7edacbe9493943f88f81f618ce23a4dc59180fa8148] ...
	I1026 15:17:56.198207  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f605e623b16c68aff5d3a7edacbe9493943f88f81f618ce23a4dc59180fa8148"
	I1026 15:17:56.294215  170754 logs.go:123] Gathering logs for container status ...
	I1026 15:17:56.294263  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 15:17:56.352990  170754 logs.go:123] Gathering logs for kubelet ...
	I1026 15:17:56.353021  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 15:17:56.441490  170754 logs.go:123] Gathering logs for etcd [bb4299fbd87de3fa7285427701084c7eb64247af9c01979f2a77e504b49ae922] ...
	I1026 15:17:56.441523  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb4299fbd87de3fa7285427701084c7eb64247af9c01979f2a77e504b49ae922"
	I1026 15:17:56.507047  170754 logs.go:123] Gathering logs for etcd [7b490bc45d498f921ac86eecf23d2e02d5f1762dbabbe70665921b8a617269f9] ...
	I1026 15:17:56.507098  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7b490bc45d498f921ac86eecf23d2e02d5f1762dbabbe70665921b8a617269f9"
	I1026 15:17:56.561486  170754 logs.go:123] Gathering logs for coredns [a829b93188eae9f7d8022317e73e6555da7294c8eacd4488e8a14341738a69de] ...
	I1026 15:17:56.561533  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a829b93188eae9f7d8022317e73e6555da7294c8eacd4488e8a14341738a69de"
	I1026 15:17:56.601947  170754 logs.go:123] Gathering logs for kube-scheduler [ae391e45391d3c752ce1d2eb3e25b24f9775855ea13aa9c77062e182dcf5cb4c] ...
	I1026 15:17:56.601989  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae391e45391d3c752ce1d2eb3e25b24f9775855ea13aa9c77062e182dcf5cb4c"
	I1026 15:17:56.648854  170754 logs.go:123] Gathering logs for kube-proxy [4517700a8fb8c777ec4e07b98cb1a392c6629f9dd19fdb0058f5650e82d81f5f] ...
	I1026 15:17:56.648899  170754 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4517700a8fb8c777ec4e07b98cb1a392c6629f9dd19fdb0058f5650e82d81f5f"
	I1026 15:17:59.411216  177820 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.640492108s
	I1026 15:17:59.596071  177820 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 15:17:59.618015  177820 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 15:17:59.633901  177820 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 15:17:59.634187  177820 kubeadm.go:318] [mark-control-plane] Marking the node no-preload-758002 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 15:17:59.649595  177820 kubeadm.go:318] [bootstrap-token] Using token: lwo38u.ix7if2n07d2aqidw
	W1026 15:17:56.202647  176942 pod_ready.go:104] pod "coredns-5dd5756b68-46566" is not "Ready", error: <nil>
	W1026 15:17:58.701803  176942 pod_ready.go:104] pod "coredns-5dd5756b68-46566" is not "Ready", error: <nil>
	I1026 15:17:59.650769  177820 out.go:252]   - Configuring RBAC rules ...
	I1026 15:17:59.650913  177820 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 15:17:59.673874  177820 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 15:17:59.689322  177820 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 15:17:59.696082  177820 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 15:17:59.705622  177820 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 15:17:59.709899  177820 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 15:17:59.818884  177820 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 15:18:00.272272  177820 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1026 15:18:00.818848  177820 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1026 15:18:00.819729  177820 kubeadm.go:318] 
	I1026 15:18:00.819825  177820 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1026 15:18:00.819840  177820 kubeadm.go:318] 
	I1026 15:18:00.819936  177820 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1026 15:18:00.819950  177820 kubeadm.go:318] 
	I1026 15:18:00.820021  177820 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1026 15:18:00.820126  177820 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 15:18:00.820226  177820 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 15:18:00.820238  177820 kubeadm.go:318] 
	I1026 15:18:00.820313  177820 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1026 15:18:00.820323  177820 kubeadm.go:318] 
	I1026 15:18:00.820386  177820 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 15:18:00.820401  177820 kubeadm.go:318] 
	I1026 15:18:00.820522  177820 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1026 15:18:00.820651  177820 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 15:18:00.820759  177820 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 15:18:00.820775  177820 kubeadm.go:318] 
	I1026 15:18:00.820885  177820 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 15:18:00.821004  177820 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1026 15:18:00.821023  177820 kubeadm.go:318] 
	I1026 15:18:00.821143  177820 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token lwo38u.ix7if2n07d2aqidw \
	I1026 15:18:00.821298  177820 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3ad055a424ab8eb6b83482448af651001c6d6c03abf832b7f498f66a21acb6be \
	I1026 15:18:00.821324  177820 kubeadm.go:318] 	--control-plane 
	I1026 15:18:00.821329  177820 kubeadm.go:318] 
	I1026 15:18:00.821486  177820 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1026 15:18:00.821505  177820 kubeadm.go:318] 
	I1026 15:18:00.821609  177820 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token lwo38u.ix7if2n07d2aqidw \
	I1026 15:18:00.821744  177820 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3ad055a424ab8eb6b83482448af651001c6d6c03abf832b7f498f66a21acb6be 
	I1026 15:18:00.822835  177820 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 15:18:00.822865  177820 cni.go:84] Creating CNI manager for ""
	I1026 15:18:00.822875  177820 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 15:18:00.824297  177820 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1026 15:17:56.461810  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:17:56.462883  178853 main.go:141] libmachine: no network interface addresses found for domain embed-certs-163393 (source=lease)
	I1026 15:17:56.462910  178853 main.go:141] libmachine: trying to list again with source=arp
	I1026 15:17:56.463427  178853 main.go:141] libmachine: unable to find current IP address of domain embed-certs-163393 in network mk-embed-certs-163393 (interfaces detected: [])
	I1026 15:17:56.463504  178853 retry.go:31] will retry after 740.368024ms: waiting for domain to come up
	I1026 15:17:57.205445  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:17:57.206382  178853 main.go:141] libmachine: no network interface addresses found for domain embed-certs-163393 (source=lease)
	I1026 15:17:57.206407  178853 main.go:141] libmachine: trying to list again with source=arp
	I1026 15:17:57.206864  178853 main.go:141] libmachine: unable to find current IP address of domain embed-certs-163393 in network mk-embed-certs-163393 (interfaces detected: [])
	I1026 15:17:57.206917  178853 retry.go:31] will retry after 1.267858294s: waiting for domain to come up
	I1026 15:17:58.476314  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:17:58.477112  178853 main.go:141] libmachine: no network interface addresses found for domain embed-certs-163393 (source=lease)
	I1026 15:17:58.477129  178853 main.go:141] libmachine: trying to list again with source=arp
	I1026 15:17:58.477577  178853 main.go:141] libmachine: unable to find current IP address of domain embed-certs-163393 in network mk-embed-certs-163393 (interfaces detected: [])
	I1026 15:17:58.477632  178853 retry.go:31] will retry after 1.679056083s: waiting for domain to come up
	I1026 15:18:00.158806  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:00.159519  178853 main.go:141] libmachine: no network interface addresses found for domain embed-certs-163393 (source=lease)
	I1026 15:18:00.159538  178853 main.go:141] libmachine: trying to list again with source=arp
	I1026 15:18:00.159928  178853 main.go:141] libmachine: unable to find current IP address of domain embed-certs-163393 in network mk-embed-certs-163393 (interfaces detected: [])
	I1026 15:18:00.159974  178853 retry.go:31] will retry after 2.179695277s: waiting for domain to come up
	I1026 15:18:00.825478  177820 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1026 15:18:00.838309  177820 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1026 15:18:00.859563  177820 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 15:18:00.859662  177820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:00.859715  177820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-758002 minikube.k8s.io/updated_at=2025_10_26T15_18_00_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46 minikube.k8s.io/name=no-preload-758002 minikube.k8s.io/primary=true
	I1026 15:18:00.917438  177820 ops.go:34] apiserver oom_adj: -16
	I1026 15:18:01.014804  177820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:01.515750  177820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:02.015701  177820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:02.515699  177820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:17:59.187857  170754 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:17:59.208023  170754 kubeadm.go:601] duration metric: took 4m3.266578421s to restartPrimaryControlPlane
	W1026 15:17:59.208109  170754 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1026 15:17:59.208180  170754 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1026 15:18:02.046399  170754 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.838193207s)
	I1026 15:18:02.046485  170754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:18:02.067669  170754 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 15:18:02.085002  170754 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 15:18:02.101237  170754 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 15:18:02.101276  170754 kubeadm.go:157] found existing configuration files:
	
	I1026 15:18:02.101339  170754 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 15:18:02.114384  170754 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 15:18:02.114485  170754 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 15:18:02.126396  170754 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 15:18:02.142701  170754 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 15:18:02.142792  170754 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 15:18:02.155147  170754 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 15:18:02.169189  170754 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 15:18:02.169273  170754 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 15:18:02.186883  170754 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 15:18:02.201291  170754 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 15:18:02.201381  170754 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
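The stale-config check above greps each expected kubeconfig under /etc/kubernetes for the control-plane endpoint and removes any file that does not reference it (or does not exist) before kubeadm runs again. A compact Go sketch of that check, assuming local file access instead of minikube's ssh_runner:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // removeStaleKubeconfigs deletes any of the given files that do not mention
    // the expected control-plane endpoint, mirroring the grep/rm sequence above.
    func removeStaleKubeconfigs(endpoint string, files []string) {
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), endpoint) {
                fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
                os.Remove(f) // ignore errors: the file may simply not exist
            }
        }
    }

    func main() {
        removeStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        })
    }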
	I1026 15:18:02.220302  170754 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1026 15:18:02.379018  170754 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 15:18:03.014836  177820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:03.515501  177820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:04.015755  177820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:04.515602  177820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:05.015474  177820 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:05.102329  177820 kubeadm.go:1113] duration metric: took 4.24274723s to wait for elevateKubeSystemPrivileges
	I1026 15:18:05.102388  177820 kubeadm.go:402] duration metric: took 16.81057756s to StartCluster
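The half-second cadence of "kubectl get sa default" above is a simple poll: keep asking the API server for the "default" service account until it exists, then report how long the wait took. A rough Go equivalent that shells out to kubectl; the binary and kubeconfig paths are taken from the log, but the helper itself is illustrative (the real code runs the command via sudo over SSH):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultServiceAccount retries `kubectl get sa default` until it
    // succeeds or the deadline passes, roughly matching the 500ms polling above.
    func waitForDefaultServiceAccount(kubectl, kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
            if err := cmd.Run(); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default service account did not appear within %s", timeout)
    }

    func main() {
        err := waitForDefaultServiceAccount(
            "/var/lib/minikube/binaries/v1.34.1/kubectl",
            "/var/lib/minikube/kubeconfig",
            2*time.Minute,
        )
        fmt.Println("wait result:", err)
    }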
	I1026 15:18:05.102418  177820 settings.go:142] acquiring lock: {Name:mk260d179873b5d5f15b4780b692965367036bbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:18:05.102533  177820 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-137233/kubeconfig
	I1026 15:18:05.103801  177820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-137233/kubeconfig: {Name:mka07626640e842c6c2177ad5f101c4a2dd91d4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:18:05.104090  177820 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 15:18:05.104111  177820 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.112 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:18:05.104222  177820 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 15:18:05.104314  177820 addons.go:69] Setting storage-provisioner=true in profile "no-preload-758002"
	I1026 15:18:05.104341  177820 addons.go:238] Setting addon storage-provisioner=true in "no-preload-758002"
	I1026 15:18:05.104345  177820 addons.go:69] Setting default-storageclass=true in profile "no-preload-758002"
	I1026 15:18:05.104363  177820 config.go:182] Loaded profile config "no-preload-758002": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:18:05.104372  177820 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-758002"
	I1026 15:18:05.104377  177820 host.go:66] Checking if "no-preload-758002" exists ...
	I1026 15:18:05.106028  177820 out.go:179] * Verifying Kubernetes components...
	I1026 15:18:05.107294  177820 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1026 15:18:01.199409  176942 pod_ready.go:104] pod "coredns-5dd5756b68-46566" is not "Ready", error: <nil>
	W1026 15:18:03.199971  176942 pod_ready.go:104] pod "coredns-5dd5756b68-46566" is not "Ready", error: <nil>
	W1026 15:18:05.201417  176942 pod_ready.go:104] pod "coredns-5dd5756b68-46566" is not "Ready", error: <nil>
	I1026 15:18:02.342116  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:02.343072  178853 main.go:141] libmachine: no network interface addresses found for domain embed-certs-163393 (source=lease)
	I1026 15:18:02.343096  178853 main.go:141] libmachine: trying to list again with source=arp
	I1026 15:18:02.343607  178853 main.go:141] libmachine: unable to find current IP address of domain embed-certs-163393 in network mk-embed-certs-163393 (interfaces detected: [])
	I1026 15:18:02.343666  178853 retry.go:31] will retry after 2.620685962s: waiting for domain to come up
	I1026 15:18:04.966947  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:04.967706  178853 main.go:141] libmachine: no network interface addresses found for domain embed-certs-163393 (source=lease)
	I1026 15:18:04.967730  178853 main.go:141] libmachine: trying to list again with source=arp
	I1026 15:18:04.968173  178853 main.go:141] libmachine: unable to find current IP address of domain embed-certs-163393 in network mk-embed-certs-163393 (interfaces detected: [])
	I1026 15:18:04.968232  178853 retry.go:31] will retry after 2.927688766s: waiting for domain to come up
	I1026 15:18:05.107313  177820 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:18:05.108568  177820 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:18:05.108589  177820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 15:18:05.109032  177820 addons.go:238] Setting addon default-storageclass=true in "no-preload-758002"
	I1026 15:18:05.109083  177820 host.go:66] Checking if "no-preload-758002" exists ...
	I1026 15:18:05.112160  177820 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 15:18:05.112183  177820 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 15:18:05.114418  177820 main.go:141] libmachine: domain no-preload-758002 has defined MAC address 52:54:00:4b:29:ca in network mk-no-preload-758002
	I1026 15:18:05.115313  177820 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:29:ca", ip: ""} in network mk-no-preload-758002: {Iface:virbr2 ExpiryTime:2025-10-26 16:17:22 +0000 UTC Type:0 Mac:52:54:00:4b:29:ca Iaid: IPaddr:192.168.50.112 Prefix:24 Hostname:no-preload-758002 Clientid:01:52:54:00:4b:29:ca}
	I1026 15:18:05.115356  177820 main.go:141] libmachine: domain no-preload-758002 has defined IP address 192.168.50.112 and MAC address 52:54:00:4b:29:ca in network mk-no-preload-758002
	I1026 15:18:05.115835  177820 sshutil.go:53] new ssh client: &{IP:192.168.50.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/no-preload-758002/id_rsa Username:docker}
	I1026 15:18:05.117320  177820 main.go:141] libmachine: domain no-preload-758002 has defined MAC address 52:54:00:4b:29:ca in network mk-no-preload-758002
	I1026 15:18:05.117928  177820 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4b:29:ca", ip: ""} in network mk-no-preload-758002: {Iface:virbr2 ExpiryTime:2025-10-26 16:17:22 +0000 UTC Type:0 Mac:52:54:00:4b:29:ca Iaid: IPaddr:192.168.50.112 Prefix:24 Hostname:no-preload-758002 Clientid:01:52:54:00:4b:29:ca}
	I1026 15:18:05.117969  177820 main.go:141] libmachine: domain no-preload-758002 has defined IP address 192.168.50.112 and MAC address 52:54:00:4b:29:ca in network mk-no-preload-758002
	I1026 15:18:05.118214  177820 sshutil.go:53] new ssh client: &{IP:192.168.50.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/no-preload-758002/id_rsa Username:docker}
	I1026 15:18:05.431950  177820 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1026 15:18:05.553934  177820 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:18:05.830488  177820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 15:18:05.854226  177820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:18:06.070410  177820 start.go:976] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1026 15:18:06.071921  177820 node_ready.go:35] waiting up to 6m0s for node "no-preload-758002" to be "Ready" ...
	I1026 15:18:06.096721  177820 node_ready.go:49] node "no-preload-758002" is "Ready"
	I1026 15:18:06.096760  177820 node_ready.go:38] duration metric: took 24.803858ms for node "no-preload-758002" to be "Ready" ...
	I1026 15:18:06.096778  177820 api_server.go:52] waiting for apiserver process to appear ...
	I1026 15:18:06.096857  177820 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:18:06.382308  177820 api_server.go:72] duration metric: took 1.278142223s to wait for apiserver process to appear ...
	I1026 15:18:06.382343  177820 api_server.go:88] waiting for apiserver healthz status ...
	I1026 15:18:06.382367  177820 api_server.go:253] Checking apiserver healthz at https://192.168.50.112:8443/healthz ...
	I1026 15:18:06.383727  177820 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1026 15:18:06.384867  177820 addons.go:514] duration metric: took 1.280668718s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1026 15:18:06.391351  177820 api_server.go:279] https://192.168.50.112:8443/healthz returned 200:
	ok
	I1026 15:18:06.392537  177820 api_server.go:141] control plane version: v1.34.1
	I1026 15:18:06.392562  177820 api_server.go:131] duration metric: took 10.211098ms to wait for apiserver health ...
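The healthz wait above boils down to an HTTPS GET against the apiserver's /healthz endpoint until it returns 200 with body "ok". A small Go sketch of one such probe; certificate verification is skipped here only to keep the sketch self-contained, whereas the real code trusts the cluster CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // checkHealthz performs a single probe of the apiserver healthz endpoint.
    func checkHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Skipping verification is an assumption of this sketch only.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
        }
        fmt.Printf("%s returned 200: %s\n", url, body)
        return nil
    }

    func main() {
        if err := checkHealthz("https://192.168.50.112:8443/healthz"); err != nil {
            fmt.Println("healthz probe failed:", err)
        }
    }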
	I1026 15:18:06.392572  177820 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 15:18:06.398409  177820 system_pods.go:59] 8 kube-system pods found
	I1026 15:18:06.398440  177820 system_pods.go:61] "coredns-66bc5c9577-nmrz8" [647c86a6-d58e-42e6-9833-493a71e3fb88] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:18:06.398448  177820 system_pods.go:61] "coredns-66bc5c9577-sqsf7" [429d6d75-2369-4188-956a-142f6d765274] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:18:06.398477  177820 system_pods.go:61] "etcd-no-preload-758002" [1b6ec061-86d0-4411-a511-0e276db433d3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:18:06.398486  177820 system_pods.go:61] "kube-apiserver-no-preload-758002" [c1f22439-f72c-4c73-808e-d4502022e8ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:18:06.398494  177820 system_pods.go:61] "kube-controller-manager-no-preload-758002" [5306be76-be30-41ec-a953-9f918dd7d637] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:18:06.398506  177820 system_pods.go:61] "kube-proxy-zdr6t" [b7e3edbf-a798-4f1c-9aef-307604d6c671] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1026 15:18:06.398514  177820 system_pods.go:61] "kube-scheduler-no-preload-758002" [72acbd0d-ce7a-4519-a801-d194dcd80b61] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:18:06.398523  177820 system_pods.go:61] "storage-provisioner" [ba9fc411-41a8-4ba5-b162-d63806dd7a16] Pending
	I1026 15:18:06.398533  177820 system_pods.go:74] duration metric: took 5.953806ms to wait for pod list to return data ...
	I1026 15:18:06.398544  177820 default_sa.go:34] waiting for default service account to be created ...
	I1026 15:18:06.402139  177820 default_sa.go:45] found service account: "default"
	I1026 15:18:06.402161  177820 default_sa.go:55] duration metric: took 3.61016ms for default service account to be created ...
	I1026 15:18:06.402186  177820 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 15:18:06.405249  177820 system_pods.go:86] 8 kube-system pods found
	I1026 15:18:06.405274  177820 system_pods.go:89] "coredns-66bc5c9577-nmrz8" [647c86a6-d58e-42e6-9833-493a71e3fb88] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:18:06.405281  177820 system_pods.go:89] "coredns-66bc5c9577-sqsf7" [429d6d75-2369-4188-956a-142f6d765274] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:18:06.405288  177820 system_pods.go:89] "etcd-no-preload-758002" [1b6ec061-86d0-4411-a511-0e276db433d3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:18:06.405295  177820 system_pods.go:89] "kube-apiserver-no-preload-758002" [c1f22439-f72c-4c73-808e-d4502022e8ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:18:06.405300  177820 system_pods.go:89] "kube-controller-manager-no-preload-758002" [5306be76-be30-41ec-a953-9f918dd7d637] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:18:06.405307  177820 system_pods.go:89] "kube-proxy-zdr6t" [b7e3edbf-a798-4f1c-9aef-307604d6c671] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1026 15:18:06.405314  177820 system_pods.go:89] "kube-scheduler-no-preload-758002" [72acbd0d-ce7a-4519-a801-d194dcd80b61] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:18:06.405323  177820 system_pods.go:89] "storage-provisioner" [ba9fc411-41a8-4ba5-b162-d63806dd7a16] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:18:06.405338  177820 retry.go:31] will retry after 188.566274ms: missing components: kube-dns, kube-proxy
	I1026 15:18:06.575702  177820 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-758002" context rescaled to 1 replicas
	I1026 15:18:06.598820  177820 system_pods.go:86] 8 kube-system pods found
	I1026 15:18:06.598869  177820 system_pods.go:89] "coredns-66bc5c9577-nmrz8" [647c86a6-d58e-42e6-9833-493a71e3fb88] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:18:06.598882  177820 system_pods.go:89] "coredns-66bc5c9577-sqsf7" [429d6d75-2369-4188-956a-142f6d765274] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:18:06.598896  177820 system_pods.go:89] "etcd-no-preload-758002" [1b6ec061-86d0-4411-a511-0e276db433d3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:18:06.598909  177820 system_pods.go:89] "kube-apiserver-no-preload-758002" [c1f22439-f72c-4c73-808e-d4502022e8ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:18:06.598918  177820 system_pods.go:89] "kube-controller-manager-no-preload-758002" [5306be76-be30-41ec-a953-9f918dd7d637] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:18:06.598926  177820 system_pods.go:89] "kube-proxy-zdr6t" [b7e3edbf-a798-4f1c-9aef-307604d6c671] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1026 15:18:06.598935  177820 system_pods.go:89] "kube-scheduler-no-preload-758002" [72acbd0d-ce7a-4519-a801-d194dcd80b61] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:18:06.598944  177820 system_pods.go:89] "storage-provisioner" [ba9fc411-41a8-4ba5-b162-d63806dd7a16] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:18:06.598975  177820 retry.go:31] will retry after 254.88108ms: missing components: kube-dns, kube-proxy
	I1026 15:18:06.858628  177820 system_pods.go:86] 8 kube-system pods found
	I1026 15:18:06.858675  177820 system_pods.go:89] "coredns-66bc5c9577-nmrz8" [647c86a6-d58e-42e6-9833-493a71e3fb88] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:18:06.858686  177820 system_pods.go:89] "coredns-66bc5c9577-sqsf7" [429d6d75-2369-4188-956a-142f6d765274] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:18:06.858696  177820 system_pods.go:89] "etcd-no-preload-758002" [1b6ec061-86d0-4411-a511-0e276db433d3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:18:06.858705  177820 system_pods.go:89] "kube-apiserver-no-preload-758002" [c1f22439-f72c-4c73-808e-d4502022e8ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:18:06.858714  177820 system_pods.go:89] "kube-controller-manager-no-preload-758002" [5306be76-be30-41ec-a953-9f918dd7d637] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:18:06.858732  177820 system_pods.go:89] "kube-proxy-zdr6t" [b7e3edbf-a798-4f1c-9aef-307604d6c671] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1026 15:18:06.858744  177820 system_pods.go:89] "kube-scheduler-no-preload-758002" [72acbd0d-ce7a-4519-a801-d194dcd80b61] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:18:06.858755  177820 system_pods.go:89] "storage-provisioner" [ba9fc411-41a8-4ba5-b162-d63806dd7a16] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:18:06.858778  177820 retry.go:31] will retry after 476.19811ms: missing components: kube-dns, kube-proxy
	I1026 15:18:07.339415  177820 system_pods.go:86] 7 kube-system pods found
	I1026 15:18:07.339474  177820 system_pods.go:89] "coredns-66bc5c9577-sqsf7" [429d6d75-2369-4188-956a-142f6d765274] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:18:07.339491  177820 system_pods.go:89] "etcd-no-preload-758002" [1b6ec061-86d0-4411-a511-0e276db433d3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:18:07.339503  177820 system_pods.go:89] "kube-apiserver-no-preload-758002" [c1f22439-f72c-4c73-808e-d4502022e8ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:18:07.339511  177820 system_pods.go:89] "kube-controller-manager-no-preload-758002" [5306be76-be30-41ec-a953-9f918dd7d637] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:18:07.339520  177820 system_pods.go:89] "kube-proxy-zdr6t" [b7e3edbf-a798-4f1c-9aef-307604d6c671] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1026 15:18:07.339528  177820 system_pods.go:89] "kube-scheduler-no-preload-758002" [72acbd0d-ce7a-4519-a801-d194dcd80b61] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:18:07.339534  177820 system_pods.go:89] "storage-provisioner" [ba9fc411-41a8-4ba5-b162-d63806dd7a16] Running
	I1026 15:18:07.339554  177820 retry.go:31] will retry after 432.052198ms: missing components: kube-dns, kube-proxy
	I1026 15:18:07.777911  177820 system_pods.go:86] 7 kube-system pods found
	I1026 15:18:07.777977  177820 system_pods.go:89] "coredns-66bc5c9577-sqsf7" [429d6d75-2369-4188-956a-142f6d765274] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:18:07.777988  177820 system_pods.go:89] "etcd-no-preload-758002" [1b6ec061-86d0-4411-a511-0e276db433d3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:18:07.778002  177820 system_pods.go:89] "kube-apiserver-no-preload-758002" [c1f22439-f72c-4c73-808e-d4502022e8ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:18:07.778015  177820 system_pods.go:89] "kube-controller-manager-no-preload-758002" [5306be76-be30-41ec-a953-9f918dd7d637] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:18:07.778024  177820 system_pods.go:89] "kube-proxy-zdr6t" [b7e3edbf-a798-4f1c-9aef-307604d6c671] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1026 15:18:07.778043  177820 system_pods.go:89] "kube-scheduler-no-preload-758002" [72acbd0d-ce7a-4519-a801-d194dcd80b61] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:18:07.778049  177820 system_pods.go:89] "storage-provisioner" [ba9fc411-41a8-4ba5-b162-d63806dd7a16] Running
	I1026 15:18:07.778069  177820 retry.go:31] will retry after 696.721573ms: missing components: kube-dns, kube-proxy
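The "missing components: kube-dns, kube-proxy" retries above come from repeatedly listing kube-system pods and checking whether each expected component has a running pod yet. A sketch of that readiness check using client-go; the component-to-pod-name-prefix mapping and the 2-minute deadline are assumptions of this sketch, not minikube's exact logic:

    package main

    import (
        "context"
        "fmt"
        "strings"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // missingComponents returns expected kube-system components with no running pod.
    func missingComponents(ctx context.Context, cs *kubernetes.Clientset) ([]string, error) {
        want := map[string]string{ // component name -> pod name prefix (assumed)
            "kube-dns":   "coredns",
            "kube-proxy": "kube-proxy",
            "etcd":       "etcd",
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
        if err != nil {
            return nil, err
        }
        var missing []string
        for component, prefix := range want {
            found := false
            for _, p := range pods.Items {
                if strings.HasPrefix(p.Name, prefix) && p.Status.Phase == "Running" {
                    found = true
                    break
                }
            }
            if !found {
                missing = append(missing, component)
            }
        }
        return missing, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            missing, err := missingComponents(context.Background(), cs)
            if err == nil && len(missing) == 0 {
                fmt.Println("all expected kube-system components are running")
                return
            }
            fmt.Printf("will retry: missing components: %v (err=%v)\n", missing, err)
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("gave up waiting for kube-system components")
    }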
	W1026 15:18:07.201562  176942 pod_ready.go:104] pod "coredns-5dd5756b68-46566" is not "Ready", error: <nil>
	W1026 15:18:09.698745  176942 pod_ready.go:104] pod "coredns-5dd5756b68-46566" is not "Ready", error: <nil>
	I1026 15:18:07.897885  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:07.899209  178853 main.go:141] libmachine: domain embed-certs-163393 has current primary IP address 192.168.39.103 and MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:07.899248  178853 main.go:141] libmachine: found domain IP: 192.168.39.103
	I1026 15:18:07.899260  178853 main.go:141] libmachine: reserving static IP address...
	I1026 15:18:07.899781  178853 main.go:141] libmachine: unable to find host DHCP lease matching {name: "embed-certs-163393", mac: "52:54:00:bb:5d:75", ip: "192.168.39.103"} in network mk-embed-certs-163393
	I1026 15:18:08.108414  178853 main.go:141] libmachine: reserved static IP address 192.168.39.103 for domain embed-certs-163393
	I1026 15:18:08.108438  178853 main.go:141] libmachine: waiting for SSH...
	I1026 15:18:08.108444  178853 main.go:141] libmachine: Getting to WaitForSSH function...
	I1026 15:18:08.112076  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:08.112657  178853 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bb:5d:75", ip: ""} in network mk-embed-certs-163393: {Iface:virbr1 ExpiryTime:2025-10-26 16:18:07 +0000 UTC Type:0 Mac:52:54:00:bb:5d:75 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:minikube Clientid:01:52:54:00:bb:5d:75}
	I1026 15:18:08.112701  178853 main.go:141] libmachine: domain embed-certs-163393 has defined IP address 192.168.39.103 and MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:08.112914  178853 main.go:141] libmachine: Using SSH client type: native
	I1026 15:18:08.113305  178853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I1026 15:18:08.113321  178853 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1026 15:18:08.234042  178853 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 15:18:08.234519  178853 main.go:141] libmachine: domain creation complete
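"waiting for SSH" above is essentially: once the IP is known, keep probing the guest until SSH answers and a trivial `exit 0` succeeds. A stripped-down Go sketch that only checks TCP reachability of port 22, standing in for the full WaitForSSH probe in the log:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForSSHPort dials host:22 until a connection succeeds or the timeout
    // expires. The real code goes further and runs `exit 0` over SSH; this sketch
    // only checks that the port is accepting connections.
    func waitForSSHPort(host string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            conn, err := net.DialTimeout("tcp", net.JoinHostPort(host, "22"), 2*time.Second)
            if err == nil {
                conn.Close()
                return nil
            }
            time.Sleep(time.Second)
        }
        return fmt.Errorf("ssh on %s did not become reachable within %s", host, timeout)
    }

    func main() {
        fmt.Println(waitForSSHPort("192.168.39.103", 30*time.Second))
    }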
	I1026 15:18:08.236407  178853 machine.go:93] provisionDockerMachine start ...
	I1026 15:18:08.239152  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:08.239638  178853 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bb:5d:75", ip: ""} in network mk-embed-certs-163393: {Iface:virbr1 ExpiryTime:2025-10-26 16:18:07 +0000 UTC Type:0 Mac:52:54:00:bb:5d:75 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:embed-certs-163393 Clientid:01:52:54:00:bb:5d:75}
	I1026 15:18:08.239671  178853 main.go:141] libmachine: domain embed-certs-163393 has defined IP address 192.168.39.103 and MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:08.239924  178853 main.go:141] libmachine: Using SSH client type: native
	I1026 15:18:08.240165  178853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I1026 15:18:08.240179  178853 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 15:18:08.359836  178853 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1026 15:18:08.359873  178853 buildroot.go:166] provisioning hostname "embed-certs-163393"
	I1026 15:18:08.363827  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:08.364479  178853 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bb:5d:75", ip: ""} in network mk-embed-certs-163393: {Iface:virbr1 ExpiryTime:2025-10-26 16:18:07 +0000 UTC Type:0 Mac:52:54:00:bb:5d:75 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:embed-certs-163393 Clientid:01:52:54:00:bb:5d:75}
	I1026 15:18:08.364523  178853 main.go:141] libmachine: domain embed-certs-163393 has defined IP address 192.168.39.103 and MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:08.364782  178853 main.go:141] libmachine: Using SSH client type: native
	I1026 15:18:08.365068  178853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I1026 15:18:08.365094  178853 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-163393 && echo "embed-certs-163393" | sudo tee /etc/hostname
	I1026 15:18:08.508692  178853 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-163393
	
	I1026 15:18:08.512839  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:08.513430  178853 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bb:5d:75", ip: ""} in network mk-embed-certs-163393: {Iface:virbr1 ExpiryTime:2025-10-26 16:18:07 +0000 UTC Type:0 Mac:52:54:00:bb:5d:75 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:embed-certs-163393 Clientid:01:52:54:00:bb:5d:75}
	I1026 15:18:08.513471  178853 main.go:141] libmachine: domain embed-certs-163393 has defined IP address 192.168.39.103 and MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:08.513717  178853 main.go:141] libmachine: Using SSH client type: native
	I1026 15:18:08.514015  178853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I1026 15:18:08.514043  178853 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-163393' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-163393/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-163393' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 15:18:08.642566  178853 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 15:18:08.642599  178853 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21664-137233/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-137233/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-137233/.minikube}
	I1026 15:18:08.642632  178853 buildroot.go:174] setting up certificates
	I1026 15:18:08.642650  178853 provision.go:84] configureAuth start
	I1026 15:18:08.645882  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:08.646333  178853 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bb:5d:75", ip: ""} in network mk-embed-certs-163393: {Iface:virbr1 ExpiryTime:2025-10-26 16:18:07 +0000 UTC Type:0 Mac:52:54:00:bb:5d:75 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:embed-certs-163393 Clientid:01:52:54:00:bb:5d:75}
	I1026 15:18:08.646360  178853 main.go:141] libmachine: domain embed-certs-163393 has defined IP address 192.168.39.103 and MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:08.648965  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:08.649438  178853 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bb:5d:75", ip: ""} in network mk-embed-certs-163393: {Iface:virbr1 ExpiryTime:2025-10-26 16:18:07 +0000 UTC Type:0 Mac:52:54:00:bb:5d:75 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:embed-certs-163393 Clientid:01:52:54:00:bb:5d:75}
	I1026 15:18:08.649473  178853 main.go:141] libmachine: domain embed-certs-163393 has defined IP address 192.168.39.103 and MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:08.649631  178853 provision.go:143] copyHostCerts
	I1026 15:18:08.649702  178853 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-137233/.minikube/ca.pem, removing ...
	I1026 15:18:08.649723  178853 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-137233/.minikube/ca.pem
	I1026 15:18:08.649833  178853 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-137233/.minikube/ca.pem (1082 bytes)
	I1026 15:18:08.649954  178853 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-137233/.minikube/cert.pem, removing ...
	I1026 15:18:08.649963  178853 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-137233/.minikube/cert.pem
	I1026 15:18:08.649995  178853 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-137233/.minikube/cert.pem (1123 bytes)
	I1026 15:18:08.650076  178853 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-137233/.minikube/key.pem, removing ...
	I1026 15:18:08.650084  178853 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-137233/.minikube/key.pem
	I1026 15:18:08.650108  178853 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-137233/.minikube/key.pem (1675 bytes)
	I1026 15:18:08.650170  178853 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-137233/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca-key.pem org=jenkins.embed-certs-163393 san=[127.0.0.1 192.168.39.103 embed-certs-163393 localhost minikube]
	I1026 15:18:09.206370  178853 provision.go:177] copyRemoteCerts
	I1026 15:18:09.206435  178853 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 15:18:09.209544  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:09.210016  178853 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bb:5d:75", ip: ""} in network mk-embed-certs-163393: {Iface:virbr1 ExpiryTime:2025-10-26 16:18:07 +0000 UTC Type:0 Mac:52:54:00:bb:5d:75 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:embed-certs-163393 Clientid:01:52:54:00:bb:5d:75}
	I1026 15:18:09.210042  178853 main.go:141] libmachine: domain embed-certs-163393 has defined IP address 192.168.39.103 and MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:09.210207  178853 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/embed-certs-163393/id_rsa Username:docker}
	I1026 15:18:09.301100  178853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 15:18:09.336044  178853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1026 15:18:09.368637  178853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1026 15:18:09.402330  178853 provision.go:87] duration metric: took 759.662052ms to configureAuth
	I1026 15:18:09.402359  178853 buildroot.go:189] setting minikube options for container-runtime
	I1026 15:18:09.402622  178853 config.go:182] Loaded profile config "embed-certs-163393": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:18:09.405912  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:09.406391  178853 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bb:5d:75", ip: ""} in network mk-embed-certs-163393: {Iface:virbr1 ExpiryTime:2025-10-26 16:18:07 +0000 UTC Type:0 Mac:52:54:00:bb:5d:75 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:embed-certs-163393 Clientid:01:52:54:00:bb:5d:75}
	I1026 15:18:09.406424  178853 main.go:141] libmachine: domain embed-certs-163393 has defined IP address 192.168.39.103 and MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:09.406664  178853 main.go:141] libmachine: Using SSH client type: native
	I1026 15:18:09.406876  178853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I1026 15:18:09.406893  178853 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 15:18:09.668208  178853 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 15:18:09.668239  178853 machine.go:96] duration metric: took 1.431810212s to provisionDockerMachine
	I1026 15:18:09.668253  178853 client.go:171] duration metric: took 18.458362485s to LocalClient.Create
	I1026 15:18:09.668275  178853 start.go:167] duration metric: took 18.458425077s to libmachine.API.Create "embed-certs-163393"
	I1026 15:18:09.668284  178853 start.go:293] postStartSetup for "embed-certs-163393" (driver="kvm2")
	I1026 15:18:09.668297  178853 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 15:18:09.668373  178853 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 15:18:09.671598  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:09.672035  178853 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bb:5d:75", ip: ""} in network mk-embed-certs-163393: {Iface:virbr1 ExpiryTime:2025-10-26 16:18:07 +0000 UTC Type:0 Mac:52:54:00:bb:5d:75 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:embed-certs-163393 Clientid:01:52:54:00:bb:5d:75}
	I1026 15:18:09.672063  178853 main.go:141] libmachine: domain embed-certs-163393 has defined IP address 192.168.39.103 and MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:09.672202  178853 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/embed-certs-163393/id_rsa Username:docker}
	I1026 15:18:09.763653  178853 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 15:18:09.769383  178853 info.go:137] Remote host: Buildroot 2025.02
	I1026 15:18:09.769425  178853 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-137233/.minikube/addons for local assets ...
	I1026 15:18:09.769505  178853 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-137233/.minikube/files for local assets ...
	I1026 15:18:09.769600  178853 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-137233/.minikube/files/etc/ssl/certs/1412332.pem -> 1412332.pem in /etc/ssl/certs
	I1026 15:18:09.769741  178853 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 15:18:09.787891  178853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/files/etc/ssl/certs/1412332.pem --> /etc/ssl/certs/1412332.pem (1708 bytes)
	I1026 15:18:09.823649  178853 start.go:296] duration metric: took 155.345554ms for postStartSetup
	I1026 15:18:09.826848  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:09.827204  178853 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bb:5d:75", ip: ""} in network mk-embed-certs-163393: {Iface:virbr1 ExpiryTime:2025-10-26 16:18:07 +0000 UTC Type:0 Mac:52:54:00:bb:5d:75 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:embed-certs-163393 Clientid:01:52:54:00:bb:5d:75}
	I1026 15:18:09.827233  178853 main.go:141] libmachine: domain embed-certs-163393 has defined IP address 192.168.39.103 and MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:09.827442  178853 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/config.json ...
	I1026 15:18:09.827629  178853 start.go:128] duration metric: took 18.619193814s to createHost
	I1026 15:18:09.829867  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:09.830224  178853 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bb:5d:75", ip: ""} in network mk-embed-certs-163393: {Iface:virbr1 ExpiryTime:2025-10-26 16:18:07 +0000 UTC Type:0 Mac:52:54:00:bb:5d:75 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:embed-certs-163393 Clientid:01:52:54:00:bb:5d:75}
	I1026 15:18:09.830244  178853 main.go:141] libmachine: domain embed-certs-163393 has defined IP address 192.168.39.103 and MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:09.830379  178853 main.go:141] libmachine: Using SSH client type: native
	I1026 15:18:09.830611  178853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I1026 15:18:09.830621  178853 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1026 15:18:09.943444  178853 main.go:141] libmachine: SSH cmd err, output: <nil>: 1761491889.896325088
	
	I1026 15:18:09.943496  178853 fix.go:216] guest clock: 1761491889.896325088
	I1026 15:18:09.943504  178853 fix.go:229] Guest: 2025-10-26 15:18:09.896325088 +0000 UTC Remote: 2025-10-26 15:18:09.827641672 +0000 UTC m=+18.731334028 (delta=68.683416ms)
	I1026 15:18:09.943521  178853 fix.go:200] guest clock delta is within tolerance: 68.683416ms
	I1026 15:18:09.943526  178853 start.go:83] releasing machines lock for "embed-certs-163393", held for 18.73515759s
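The fix.go lines above compare the guest clock (read via `date +%s.%N` over SSH) with the host's view of the remote time and accept the machine if the delta stays within a tolerance. A tiny Go sketch of that comparison, using the timestamps from the log; the 2s tolerance is an assumption for illustration:

    package main

    import (
        "fmt"
        "time"
    )

    // clockDeltaWithinTolerance reports whether guest and host clocks differ by
    // no more than tol. The tolerance value is assumed for this sketch.
    func clockDeltaWithinTolerance(guest, host time.Time, tol time.Duration) (time.Duration, bool) {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tol
    }

    func main() {
        guest := time.Unix(1761491889, 896325088)       // parsed from `date +%s.%N` on the guest
        host := guest.Add(-68683416 * time.Nanosecond)  // host-side reference, per the 68.683416ms delta above
        delta, ok := clockDeltaWithinTolerance(guest, host, 2*time.Second)
        fmt.Printf("delta=%s within tolerance: %v\n", delta, ok)
    }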
	I1026 15:18:09.946765  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:09.947208  178853 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bb:5d:75", ip: ""} in network mk-embed-certs-163393: {Iface:virbr1 ExpiryTime:2025-10-26 16:18:07 +0000 UTC Type:0 Mac:52:54:00:bb:5d:75 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:embed-certs-163393 Clientid:01:52:54:00:bb:5d:75}
	I1026 15:18:09.947242  178853 main.go:141] libmachine: domain embed-certs-163393 has defined IP address 192.168.39.103 and MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:09.947793  178853 ssh_runner.go:195] Run: cat /version.json
	I1026 15:18:09.947856  178853 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 15:18:09.950746  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:09.951062  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:09.951265  178853 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bb:5d:75", ip: ""} in network mk-embed-certs-163393: {Iface:virbr1 ExpiryTime:2025-10-26 16:18:07 +0000 UTC Type:0 Mac:52:54:00:bb:5d:75 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:embed-certs-163393 Clientid:01:52:54:00:bb:5d:75}
	I1026 15:18:09.951295  178853 main.go:141] libmachine: domain embed-certs-163393 has defined IP address 192.168.39.103 and MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:09.951485  178853 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/embed-certs-163393/id_rsa Username:docker}
	I1026 15:18:09.951628  178853 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bb:5d:75", ip: ""} in network mk-embed-certs-163393: {Iface:virbr1 ExpiryTime:2025-10-26 16:18:07 +0000 UTC Type:0 Mac:52:54:00:bb:5d:75 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:embed-certs-163393 Clientid:01:52:54:00:bb:5d:75}
	I1026 15:18:09.951663  178853 main.go:141] libmachine: domain embed-certs-163393 has defined IP address 192.168.39.103 and MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:09.951813  178853 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/embed-certs-163393/id_rsa Username:docker}
	I1026 15:18:10.059000  178853 ssh_runner.go:195] Run: systemctl --version
	I1026 15:18:10.067022  178853 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 15:18:10.219313  178853 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 15:18:10.225970  178853 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 15:18:10.226058  178853 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 15:18:10.245615  178853 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 15:18:10.245648  178853 start.go:495] detecting cgroup driver to use...
	I1026 15:18:10.245731  178853 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 15:18:10.263804  178853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 15:18:10.280269  178853 docker.go:218] disabling cri-docker service (if available) ...
	I1026 15:18:10.280344  178853 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 15:18:10.298076  178853 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 15:18:10.312935  178853 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 15:18:10.469319  178853 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 15:18:10.701535  178853 docker.go:234] disabling docker service ...
	I1026 15:18:10.701621  178853 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 15:18:10.720130  178853 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 15:18:10.735741  178853 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 15:18:10.912675  178853 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 15:18:11.060726  178853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 15:18:11.077148  178853 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 15:18:11.099487  178853 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 15:18:11.099560  178853 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:18:11.113343  178853 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 15:18:11.113413  178853 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:18:11.127886  178853 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:18:11.141518  178853 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
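The block above configures CRI-O by editing /etc/crio/crio.conf.d/02-crio.conf in place: the pause image is pinned, cgroup_manager is set to "cgroupfs", and conmon_cgroup is re-added as "pod". A sketch of driving those same shell edits from Go; runCommand is a local stand-in for minikube's ssh_runner, which executes the commands on the guest over SSH:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // runCommand is a local stand-in for minikube's ssh_runner.Run.
    func runCommand(script string) error {
        out, err := exec.Command("sh", "-c", script).CombinedOutput()
        fmt.Printf("$ %s\n%s", script, out)
        return err
    }

    // configureCRIO applies the same sed edits shown in the log above.
    func configureCRIO(conf, pauseImage, cgroupManager string) error {
        steps := []string{
            fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
            fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
            fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
            fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
        }
        for _, s := range steps {
            if err := runCommand(s); err != nil {
                return err
            }
        }
        return nil
    }

    func main() {
        _ = configureCRIO("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.10.1", "cgroupfs")
    }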
	I1026 15:18:11.406388  170754 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1026 15:18:11.406516  170754 kubeadm.go:318] [preflight] Running pre-flight checks
	I1026 15:18:11.406634  170754 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 15:18:11.406771  170754 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 15:18:11.406929  170754 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 15:18:11.407054  170754 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 15:18:11.408925  170754 out.go:252]   - Generating certificates and keys ...
	I1026 15:18:11.409140  170754 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1026 15:18:11.409601  170754 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1026 15:18:11.409728  170754 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1026 15:18:11.409826  170754 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1026 15:18:11.409930  170754 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1026 15:18:11.410096  170754 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1026 15:18:11.410211  170754 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1026 15:18:11.410341  170754 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1026 15:18:11.410529  170754 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1026 15:18:11.410695  170754 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1026 15:18:11.410773  170754 kubeadm.go:318] [certs] Using the existing "sa" key
	I1026 15:18:11.410868  170754 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 15:18:11.410938  170754 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 15:18:11.411037  170754 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 15:18:11.411110  170754 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 15:18:11.411196  170754 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 15:18:11.411284  170754 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 15:18:11.411413  170754 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 15:18:11.411545  170754 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 15:18:11.412975  170754 out.go:252]   - Booting up control plane ...
	I1026 15:18:11.413106  170754 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 15:18:11.413220  170754 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 15:18:11.413325  170754 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 15:18:11.413490  170754 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 15:18:11.413623  170754 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1026 15:18:11.413769  170754 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1026 15:18:11.413888  170754 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 15:18:11.413946  170754 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1026 15:18:11.414133  170754 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 15:18:11.414294  170754 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 15:18:11.414382  170754 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001124481s
	I1026 15:18:11.414524  170754 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1026 15:18:11.414631  170754 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.72.175:8443/livez
	I1026 15:18:11.414797  170754 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1026 15:18:11.414930  170754 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1026 15:18:11.415032  170754 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.179109967s
	I1026 15:18:11.415134  170754 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.430934484s
	I1026 15:18:11.415224  170754 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 5.001879134s
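
The [kubelet-check] and [control-plane-check] lines above poll fixed local endpoints: the kubelet healthz on 10248, kube-controller-manager on 10257, kube-scheduler on 10259, and the apiserver livez on the node's 8443. A minimal sketch of reproducing those probes by hand from inside the node (assuming curl is available in the guest; -k is needed because the controller-manager and scheduler serve self-signed certificates):

    curl -s  http://127.0.0.1:10248/healthz;   echo   # kubelet
    curl -sk https://127.0.0.1:10257/healthz;  echo   # kube-controller-manager
    curl -sk https://127.0.0.1:10259/livez;    echo   # kube-scheduler
    curl -sk https://192.168.72.175:8443/livez; echo  # kube-apiserver (node IP from the log above)
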
	I1026 15:18:11.415398  170754 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 15:18:11.415622  170754 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 15:18:11.415726  170754 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 15:18:11.416015  170754 kubeadm.go:318] [mark-control-plane] Marking the node pause-750553 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 15:18:11.416093  170754 kubeadm.go:318] [bootstrap-token] Using token: 67vbze.ccs7edufrsqva8ht
	I1026 15:18:11.417280  170754 out.go:252]   - Configuring RBAC rules ...
	I1026 15:18:11.417404  170754 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 15:18:11.417540  170754 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 15:18:11.417710  170754 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 15:18:11.417902  170754 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 15:18:11.418025  170754 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 15:18:11.418102  170754 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 15:18:11.418230  170754 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 15:18:11.418283  170754 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1026 15:18:11.418352  170754 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1026 15:18:11.418365  170754 kubeadm.go:318] 
	I1026 15:18:11.418441  170754 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1026 15:18:11.418475  170754 kubeadm.go:318] 
	I1026 15:18:11.418600  170754 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1026 15:18:11.418612  170754 kubeadm.go:318] 
	I1026 15:18:11.418648  170754 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1026 15:18:11.418733  170754 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 15:18:11.418803  170754 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 15:18:11.418815  170754 kubeadm.go:318] 
	I1026 15:18:11.418891  170754 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1026 15:18:11.418908  170754 kubeadm.go:318] 
	I1026 15:18:11.418982  170754 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 15:18:11.418988  170754 kubeadm.go:318] 
	I1026 15:18:11.419075  170754 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1026 15:18:11.419194  170754 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 15:18:11.419303  170754 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 15:18:11.419316  170754 kubeadm.go:318] 
	I1026 15:18:11.419417  170754 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 15:18:11.419559  170754 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1026 15:18:11.419570  170754 kubeadm.go:318] 
	I1026 15:18:11.419679  170754 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 67vbze.ccs7edufrsqva8ht \
	I1026 15:18:11.419850  170754 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3ad055a424ab8eb6b83482448af651001c6d6c03abf832b7f498f66a21acb6be \
	I1026 15:18:11.419881  170754 kubeadm.go:318] 	--control-plane 
	I1026 15:18:11.419892  170754 kubeadm.go:318] 
	I1026 15:18:11.420003  170754 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1026 15:18:11.420019  170754 kubeadm.go:318] 
	I1026 15:18:11.420132  170754 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 67vbze.ccs7edufrsqva8ht \
	I1026 15:18:11.420283  170754 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3ad055a424ab8eb6b83482448af651001c6d6c03abf832b7f498f66a21acb6be 
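
The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key. If the printed value is lost, it can be recomputed with the standard openssl pipeline from the kubeadm documentation; a sketch against the certificateDir this run uses (/var/lib/minikube/certs):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
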
	I1026 15:18:11.420312  170754 cni.go:84] Creating CNI manager for ""
	I1026 15:18:11.420322  170754 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 15:18:11.421589  170754 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1026 15:18:11.155262  178853 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 15:18:11.167436  178853 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:18:11.179905  178853 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:18:11.200259  178853 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
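
The sed edits above (together with the pause_image, cgroup_manager and conmon_cgroup edits at 15:18:11.099-11.141) only touch a handful of keys in the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf. A quick way to confirm the result, with the values the sed expressions are expected to leave behind shown as comments:

    grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # expected after the edits in this log:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",   (inside the default_sysctls = [ ... ] list)
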
	I1026 15:18:11.212634  178853 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 15:18:11.223171  178853 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1026 15:18:11.223224  178853 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1026 15:18:11.245165  178853 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
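
The modprobe br_netfilter and the ip_forward write above are the usual bridge-networking prerequisites for Kubernetes: bridged pod traffic must traverse iptables, and IPv4 forwarding must be enabled. A quick sketch for verifying both on a node (each sysctl should report 1):

    sudo modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
    # to persist across reboots (not needed here, since minikube reapplies this on every start):
    #   echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf
    #   printf 'net.bridge.bridge-nf-call-iptables=1\nnet.ipv4.ip_forward=1\n' | sudo tee /etc/sysctl.d/k8s.conf
    #   sudo sysctl --system
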
	I1026 15:18:11.257714  178853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:18:11.419386  178853 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 15:18:11.555859  178853 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 15:18:11.555959  178853 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 15:18:11.561593  178853 start.go:563] Will wait 60s for crictl version
	I1026 15:18:11.561676  178853 ssh_runner.go:195] Run: which crictl
	I1026 15:18:11.565500  178853 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1026 15:18:11.605904  178853 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1026 15:18:11.605997  178853 ssh_runner.go:195] Run: crio --version
	I1026 15:18:11.639831  178853 ssh_runner.go:195] Run: crio --version
	I1026 15:18:11.676581  178853 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1026 15:18:08.479112  177820 system_pods.go:86] 7 kube-system pods found
	I1026 15:18:08.479146  177820 system_pods.go:89] "coredns-66bc5c9577-sqsf7" [429d6d75-2369-4188-956a-142f6d765274] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:18:08.479158  177820 system_pods.go:89] "etcd-no-preload-758002" [1b6ec061-86d0-4411-a511-0e276db433d3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:18:08.479169  177820 system_pods.go:89] "kube-apiserver-no-preload-758002" [c1f22439-f72c-4c73-808e-d4502022e8ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:18:08.479177  177820 system_pods.go:89] "kube-controller-manager-no-preload-758002" [5306be76-be30-41ec-a953-9f918dd7d637] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:18:08.479183  177820 system_pods.go:89] "kube-proxy-zdr6t" [b7e3edbf-a798-4f1c-9aef-307604d6c671] Running
	I1026 15:18:08.479190  177820 system_pods.go:89] "kube-scheduler-no-preload-758002" [72acbd0d-ce7a-4519-a801-d194dcd80b61] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:18:08.479196  177820 system_pods.go:89] "storage-provisioner" [ba9fc411-41a8-4ba5-b162-d63806dd7a16] Running
	I1026 15:18:08.479217  177820 system_pods.go:126] duration metric: took 2.077014595s to wait for k8s-apps to be running ...
	I1026 15:18:08.479231  177820 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 15:18:08.479288  177820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:18:08.499611  177820 system_svc.go:56] duration metric: took 20.370764ms WaitForService to wait for kubelet
	I1026 15:18:08.499644  177820 kubeadm.go:586] duration metric: took 3.395489547s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:18:08.499662  177820 node_conditions.go:102] verifying NodePressure condition ...
	I1026 15:18:08.503525  177820 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 15:18:08.503563  177820 node_conditions.go:123] node cpu capacity is 2
	I1026 15:18:08.503581  177820 node_conditions.go:105] duration metric: took 3.913686ms to run NodePressure ...
	I1026 15:18:08.503596  177820 start.go:241] waiting for startup goroutines ...
	I1026 15:18:08.503606  177820 start.go:246] waiting for cluster config update ...
	I1026 15:18:08.503626  177820 start.go:255] writing updated cluster config ...
	I1026 15:18:08.503959  177820 ssh_runner.go:195] Run: rm -f paused
	I1026 15:18:08.510701  177820 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:18:08.515655  177820 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-sqsf7" in "kube-system" namespace to be "Ready" or be gone ...
	W1026 15:18:10.524590  177820 pod_ready.go:104] pod "coredns-66bc5c9577-sqsf7" is not "Ready", error: <nil>
	I1026 15:18:11.422571  170754 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1026 15:18:11.443884  170754 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
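
The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is minikube's bridge CNI configuration; its exact contents are not echoed in the log. A typical bridge + portmap conflist for the 10.244.0.0/16 pod CIDR minikube uses by default looks roughly like the illustrative sketch below (field values are assumptions, not the literal file):

    cat <<'EOF' > /tmp/1-k8s.conflist.example
    {
      "cniVersion": "1.0.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
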
	I1026 15:18:11.489694  170754 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 15:18:11.489800  170754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:11.489839  170754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes pause-750553 minikube.k8s.io/updated_at=2025_10_26T15_18_11_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46 minikube.k8s.io/name=pause-750553 minikube.k8s.io/primary=true
	I1026 15:18:11.650540  170754 ops.go:34] apiserver oom_adj: -16
	I1026 15:18:11.650684  170754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:12.150825  170754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:12.651258  170754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:13.150904  170754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1026 15:18:11.698977  176942 pod_ready.go:104] pod "coredns-5dd5756b68-46566" is not "Ready", error: <nil>
	W1026 15:18:13.699502  176942 pod_ready.go:104] pod "coredns-5dd5756b68-46566" is not "Ready", error: <nil>
	I1026 15:18:11.680507  178853 main.go:141] libmachine: domain embed-certs-163393 has defined MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:11.681031  178853 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bb:5d:75", ip: ""} in network mk-embed-certs-163393: {Iface:virbr1 ExpiryTime:2025-10-26 16:18:07 +0000 UTC Type:0 Mac:52:54:00:bb:5d:75 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:embed-certs-163393 Clientid:01:52:54:00:bb:5d:75}
	I1026 15:18:11.681072  178853 main.go:141] libmachine: domain embed-certs-163393 has defined IP address 192.168.39.103 and MAC address 52:54:00:bb:5d:75 in network mk-embed-certs-163393
	I1026 15:18:11.681337  178853 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1026 15:18:11.686204  178853 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:18:11.703014  178853 kubeadm.go:883] updating cluster {Name:embed-certs-163393 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-163393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.103 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 15:18:11.703130  178853 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:18:11.703175  178853 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:18:11.740765  178853 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1026 15:18:11.740837  178853 ssh_runner.go:195] Run: which lz4
	I1026 15:18:11.745258  178853 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1026 15:18:11.750220  178853 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1026 15:18:11.750270  178853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1026 15:18:13.211048  178853 crio.go:462] duration metric: took 1.465833967s to copy over tarball
	I1026 15:18:13.211132  178853 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1026 15:18:14.868710  178853 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.657537205s)
	I1026 15:18:14.868739  178853 crio.go:469] duration metric: took 1.657660446s to extract the tarball
	I1026 15:18:14.868746  178853 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1026 15:18:14.910498  178853 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:18:14.952967  178853 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:18:14.952994  178853 cache_images.go:85] Images are preloaded, skipping loading
	I1026 15:18:14.953003  178853 kubeadm.go:934] updating node { 192.168.39.103 8443 v1.34.1 crio true true} ...
	I1026 15:18:14.953100  178853 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-163393 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.103
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-163393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 15:18:14.953179  178853 ssh_runner.go:195] Run: crio config
	I1026 15:18:15.000853  178853 cni.go:84] Creating CNI manager for ""
	I1026 15:18:15.000882  178853 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 15:18:15.000902  178853 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 15:18:15.000925  178853 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.103 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-163393 NodeName:embed-certs-163393 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.103"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.103 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 15:18:15.001061  178853 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.103
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-163393"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.103"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.103"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 15:18:15.001137  178853 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 15:18:15.013227  178853 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 15:18:15.013306  178853 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 15:18:15.024987  178853 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1026 15:18:15.046596  178853 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 15:18:15.066440  178853 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
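
The rendered kubeadm config shown above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new (2221 bytes) and later copied to /var/tmp/minikube/kubeadm.yaml before kubeadm init runs. A hedged sketch of sanity-checking such a file without mutating node state, using the same pinned kubeadm binary this run puts under /var/lib/minikube/binaries:

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
        --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
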
	I1026 15:18:15.088110  178853 ssh_runner.go:195] Run: grep 192.168.39.103	control-plane.minikube.internal$ /etc/hosts
	I1026 15:18:15.092193  178853 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.103	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:18:15.106183  178853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:18:15.252263  178853 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:18:15.272696  178853 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393 for IP: 192.168.39.103
	I1026 15:18:15.272723  178853 certs.go:195] generating shared ca certs ...
	I1026 15:18:15.272747  178853 certs.go:227] acquiring lock for ca certs: {Name:mk93131c71acd79b9ab313e88723331b0af2d4bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:18:15.272953  178853 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-137233/.minikube/ca.key
	I1026 15:18:15.273048  178853 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-137233/.minikube/proxy-client-ca.key
	I1026 15:18:15.273072  178853 certs.go:257] generating profile certs ...
	I1026 15:18:15.273156  178853 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/client.key
	I1026 15:18:15.273182  178853 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/client.crt with IP's: []
	I1026 15:18:15.379843  178853 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/client.crt ...
	I1026 15:18:15.379878  178853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/client.crt: {Name:mk5da6a5a1fc7e75e614932409f60fb9762a0166 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:18:15.380065  178853 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/client.key ...
	I1026 15:18:15.380077  178853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/client.key: {Name:mkf872e3e0d0cb7b05c86f855281eddc4679f1da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:18:15.380154  178853 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/apiserver.key.df7b1e59
	I1026 15:18:15.380169  178853 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/apiserver.crt.df7b1e59 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.103]
	I1026 15:18:15.699094  178853 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/apiserver.crt.df7b1e59 ...
	I1026 15:18:15.699130  178853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/apiserver.crt.df7b1e59: {Name:mkc706865f91fcd7025cc2a28277beb6ca475281 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:18:15.699349  178853 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/apiserver.key.df7b1e59 ...
	I1026 15:18:15.699375  178853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/apiserver.key.df7b1e59: {Name:mk4320a3c4fce39f17ee887c1fbe61aad1c9704e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:18:15.699543  178853 certs.go:382] copying /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/apiserver.crt.df7b1e59 -> /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/apiserver.crt
	I1026 15:18:15.699649  178853 certs.go:386] copying /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/apiserver.key.df7b1e59 -> /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/apiserver.key
	I1026 15:18:15.699749  178853 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/proxy-client.key
	I1026 15:18:15.699774  178853 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/proxy-client.crt with IP's: []
	I1026 15:18:15.977029  178853 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/proxy-client.crt ...
	I1026 15:18:15.977071  178853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/proxy-client.crt: {Name:mkec8a558b297e96f1f00ed264aad0379456c2c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:18:15.977282  178853 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/proxy-client.key ...
	I1026 15:18:15.977302  178853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/proxy-client.key: {Name:mk09f0e9bc803db49e88aa8d09e85d4d23fe2fc2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:18:15.977557  178853 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/141233.pem (1338 bytes)
	W1026 15:18:15.977616  178853 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-137233/.minikube/certs/141233_empty.pem, impossibly tiny 0 bytes
	I1026 15:18:15.977632  178853 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca-key.pem (1679 bytes)
	I1026 15:18:15.977670  178853 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca.pem (1082 bytes)
	I1026 15:18:15.977711  178853 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/cert.pem (1123 bytes)
	I1026 15:18:15.977750  178853 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/key.pem (1675 bytes)
	I1026 15:18:15.977824  178853 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-137233/.minikube/files/etc/ssl/certs/1412332.pem (1708 bytes)
	I1026 15:18:15.978500  178853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 15:18:16.017198  178853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1026 15:18:16.052335  178853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 15:18:16.082191  178853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 15:18:16.112650  178853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1026 15:18:16.143032  178853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 15:18:16.178952  178853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 15:18:16.210684  178853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/embed-certs-163393/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 15:18:16.251927  178853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/certs/141233.pem --> /usr/share/ca-certificates/141233.pem (1338 bytes)
	I1026 15:18:16.285150  178853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/files/etc/ssl/certs/1412332.pem --> /usr/share/ca-certificates/1412332.pem (1708 bytes)
	I1026 15:18:16.320115  178853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 15:18:16.350222  178853 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 15:18:16.371943  178853 ssh_runner.go:195] Run: openssl version
	I1026 15:18:16.378048  178853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/141233.pem && ln -fs /usr/share/ca-certificates/141233.pem /etc/ssl/certs/141233.pem"
	I1026 15:18:16.392830  178853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/141233.pem
	I1026 15:18:16.397966  178853 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 14:24 /usr/share/ca-certificates/141233.pem
	I1026 15:18:16.398028  178853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/141233.pem
	I1026 15:18:16.405383  178853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/141233.pem /etc/ssl/certs/51391683.0"
	I1026 15:18:16.418616  178853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1412332.pem && ln -fs /usr/share/ca-certificates/1412332.pem /etc/ssl/certs/1412332.pem"
	I1026 15:18:16.431923  178853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1412332.pem
	I1026 15:18:16.437188  178853 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 14:24 /usr/share/ca-certificates/1412332.pem
	I1026 15:18:16.437254  178853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1412332.pem
	I1026 15:18:16.444195  178853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1412332.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 15:18:16.457103  178853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 15:18:16.469825  178853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:18:16.474716  178853 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 14:16 /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:18:16.474769  178853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:18:16.481766  178853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
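
The openssl x509 -hash calls above print the subject-name hash that OpenSSL's CApath lookup expects, and the ln -fs commands create the matching <hash>.0 links (e.g. b5213941.0 for minikubeCA.pem) under /etc/ssl/certs. A minimal sketch of checking that a certificate resolves through that mechanism:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    ls -l "/etc/ssl/certs/${h}.0"        # should link back to minikubeCA.pem
    openssl verify -CApath /etc/ssl/certs /var/lib/minikube/certs/apiserver.crt
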
	I1026 15:18:16.498345  178853 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 15:18:16.503955  178853 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 15:18:16.504023  178853 kubeadm.go:400] StartCluster: {Name:embed-certs-163393 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-163393 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.103 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:18:16.504124  178853 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 15:18:16.504189  178853 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 15:18:16.549983  178853 cri.go:89] found id: ""
	I1026 15:18:16.550087  178853 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 15:18:16.563167  178853 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 15:18:16.576038  178853 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 15:18:16.591974  178853 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 15:18:16.591995  178853 kubeadm.go:157] found existing configuration files:
	
	I1026 15:18:16.592071  178853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 15:18:16.606260  178853 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 15:18:16.606357  178853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 15:18:16.621665  178853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 15:18:16.634690  178853 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 15:18:16.634794  178853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 15:18:16.648108  178853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 15:18:16.661844  178853 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 15:18:16.661906  178853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 15:18:16.675279  178853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 15:18:16.687770  178853 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 15:18:16.687854  178853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 15:18:16.704923  178853 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1026 15:18:16.770011  178853 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1026 15:18:16.770071  178853 kubeadm.go:318] [preflight] Running pre-flight checks
	I1026 15:18:16.861261  178853 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 15:18:16.861404  178853 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 15:18:16.861550  178853 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 15:18:16.875045  178853 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 15:18:13.651733  170754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:14.151352  170754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:14.651308  170754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:15.151690  170754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:15.651712  170754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:16.151207  170754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:16.651180  170754 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:17.006229  170754 kubeadm.go:1113] duration metric: took 5.516499533s to wait for elevateKubeSystemPrivileges
	I1026 15:18:17.006271  170754 kubeadm.go:402] duration metric: took 4m21.183330721s to StartCluster
	I1026 15:18:17.006293  170754 settings.go:142] acquiring lock: {Name:mk260d179873b5d5f15b4780b692965367036bbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:18:17.006400  170754 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-137233/kubeconfig
	I1026 15:18:17.008320  170754 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-137233/kubeconfig: {Name:mka07626640e842c6c2177ad5f101c4a2dd91d4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:18:17.080354  170754 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.175 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:18:17.080472  170754 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 15:18:17.080672  170754 config.go:182] Loaded profile config "pause-750553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:18:17.119686  170754 out.go:179] * Verifying Kubernetes components...
	I1026 15:18:17.119691  170754 out.go:179] * Enabled addons: 
	W1026 15:18:13.023603  177820 pod_ready.go:104] pod "coredns-66bc5c9577-sqsf7" is not "Ready", error: <nil>
	W1026 15:18:15.522717  177820 pod_ready.go:104] pod "coredns-66bc5c9577-sqsf7" is not "Ready", error: <nil>
	I1026 15:18:17.188959  170754 addons.go:514] duration metric: took 108.487064ms for enable addons: enabled=[]
	I1026 15:18:17.189008  170754 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:18:17.373798  170754 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:18:17.394714  170754 node_ready.go:35] waiting up to 6m0s for node "pause-750553" to be "Ready" ...
	I1026 15:18:18.205474  170754 node_ready.go:49] node "pause-750553" is "Ready"
	I1026 15:18:18.205509  170754 node_ready.go:38] duration metric: took 810.745524ms for node "pause-750553" to be "Ready" ...
	I1026 15:18:18.205528  170754 api_server.go:52] waiting for apiserver process to appear ...
	I1026 15:18:18.205594  170754 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:18:18.239724  170754 api_server.go:72] duration metric: took 1.159313338s to wait for apiserver process to appear ...
	I1026 15:18:18.239759  170754 api_server.go:88] waiting for apiserver healthz status ...
	I1026 15:18:18.239780  170754 api_server.go:253] Checking apiserver healthz at https://192.168.72.175:8443/healthz ...
	W1026 15:18:16.201574  176942 pod_ready.go:104] pod "coredns-5dd5756b68-46566" is not "Ready", error: <nil>
	I1026 15:18:18.700337  176942 pod_ready.go:94] pod "coredns-5dd5756b68-46566" is "Ready"
	I1026 15:18:18.700370  176942 pod_ready.go:86] duration metric: took 38.007455801s for pod "coredns-5dd5756b68-46566" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:18:18.700382  176942 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-6wbnw" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:18:18.702801  176942 pod_ready.go:99] pod "coredns-5dd5756b68-6wbnw" in "kube-system" namespace is gone: getting pod "coredns-5dd5756b68-6wbnw" in "kube-system" namespace (will retry): pods "coredns-5dd5756b68-6wbnw" not found
	I1026 15:18:18.702822  176942 pod_ready.go:86] duration metric: took 2.431905ms for pod "coredns-5dd5756b68-6wbnw" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:18:18.707225  176942 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-065983" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:18:18.713493  176942 pod_ready.go:94] pod "etcd-old-k8s-version-065983" is "Ready"
	I1026 15:18:18.713533  176942 pod_ready.go:86] duration metric: took 6.2848ms for pod "etcd-old-k8s-version-065983" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:18:18.718250  176942 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-065983" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:18:18.724818  176942 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-065983" is "Ready"
	I1026 15:18:18.724848  176942 pod_ready.go:86] duration metric: took 6.569944ms for pod "kube-apiserver-old-k8s-version-065983" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:18:18.727781  176942 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-065983" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:18:19.096810  176942 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-065983" is "Ready"
	I1026 15:18:19.096843  176942 pod_ready.go:86] duration metric: took 369.033655ms for pod "kube-controller-manager-old-k8s-version-065983" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:18:19.299181  176942 pod_ready.go:83] waiting for pod "kube-proxy-bs4p4" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:18:19.696333  176942 pod_ready.go:94] pod "kube-proxy-bs4p4" is "Ready"
	I1026 15:18:19.696365  176942 pod_ready.go:86] duration metric: took 397.149805ms for pod "kube-proxy-bs4p4" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:18:19.897834  176942 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-065983" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:18:20.296898  176942 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-065983" is "Ready"
	I1026 15:18:20.296932  176942 pod_ready.go:86] duration metric: took 399.056756ms for pod "kube-scheduler-old-k8s-version-065983" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:18:20.296945  176942 pod_ready.go:40] duration metric: took 39.608901275s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:18:20.341895  176942 start.go:624] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1026 15:18:20.343471  176942 out.go:203] 
	W1026 15:18:20.344571  176942 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1026 15:18:20.345531  176942 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1026 15:18:20.346709  176942 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-065983" cluster and "default" namespace by default
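
The warning above flags a client/cluster skew: the host kubectl is v1.34.1 while old-k8s-version-065983 runs Kubernetes v1.28.0, which is outside kubectl's supported +/-1 minor skew. A sketch of confirming the skew and of using the version-matched client that minikube can provide:

    kubectl version                                        # prints client and server versions
    minikube -p old-k8s-version-065983 kubectl -- version  # uses a kubectl matching the cluster version
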
	I1026 15:18:16.929950  178853 out.go:252]   - Generating certificates and keys ...
	I1026 15:18:16.930121  178853 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1026 15:18:16.930246  178853 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1026 15:18:17.275036  178853 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 15:18:17.424652  178853 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1026 15:18:17.694323  178853 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1026 15:18:17.760202  178853 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1026 15:18:18.278551  178853 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1026 15:18:18.278696  178853 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-163393 localhost] and IPs [192.168.39.103 127.0.0.1 ::1]
	I1026 15:18:18.594369  178853 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1026 15:18:18.594695  178853 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-163393 localhost] and IPs [192.168.39.103 127.0.0.1 ::1]
	I1026 15:18:19.245169  178853 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 15:18:19.641449  178853 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 15:18:19.986891  178853 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1026 15:18:19.986956  178853 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 15:18:20.043176  178853 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 15:18:20.511025  178853 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 15:18:20.810927  178853 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 15:18:21.135955  178853 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 15:18:21.353393  178853 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 15:18:21.354093  178853 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 15:18:21.356485  178853 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1026 15:18:18.497737  177820 pod_ready.go:104] pod "coredns-66bc5c9577-sqsf7" is not "Ready", error: <nil>
	W1026 15:18:20.521956  177820 pod_ready.go:104] pod "coredns-66bc5c9577-sqsf7" is not "Ready", error: <nil>
	W1026 15:18:22.523790  177820 pod_ready.go:104] pod "coredns-66bc5c9577-sqsf7" is not "Ready", error: <nil>
	I1026 15:18:18.669171  170754 api_server.go:279] https://192.168.72.175:8443/healthz returned 200:
	ok
	I1026 15:18:18.672092  170754 api_server.go:141] control plane version: v1.34.1
	I1026 15:18:18.672158  170754 api_server.go:131] duration metric: took 432.389369ms to wait for apiserver health ...
	I1026 15:18:18.672171  170754 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 15:18:18.694765  170754 system_pods.go:59] 7 kube-system pods found
	I1026 15:18:18.694803  170754 system_pods.go:61] "coredns-66bc5c9577-5km5n" [da30f29b-ab29-4d65-ba42-0626bad52267] Pending
	I1026 15:18:18.694811  170754 system_pods.go:61] "coredns-66bc5c9577-77frh" [af90376e-433e-4f19-b0c8-0ddf58a79b0b] Pending
	I1026 15:18:18.694824  170754 system_pods.go:61] "etcd-pause-750553" [b108b19d-4036-4cd5-8681-f0d2262a3c5c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:18:18.694833  170754 system_pods.go:61] "kube-apiserver-pause-750553" [dd5a0e81-80f5-4979-a26e-3d628737b8b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:18:18.694844  170754 system_pods.go:61] "kube-controller-manager-pause-750553" [d1922dca-907b-4987-a109-d9076b60a615] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:18:18.694853  170754 system_pods.go:61] "kube-proxy-5bgtf" [c84300cc-7cc1-4b0d-83e7-052a94f0c7ab] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1026 15:18:18.694860  170754 system_pods.go:61] "kube-scheduler-pause-750553" [c88ec255-28ce-4764-b0b2-ba5236312c0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:18:18.694872  170754 system_pods.go:74] duration metric: took 22.693932ms to wait for pod list to return data ...
	I1026 15:18:18.694885  170754 default_sa.go:34] waiting for default service account to be created ...
	I1026 15:18:18.707135  170754 default_sa.go:45] found service account: "default"
	I1026 15:18:18.707160  170754 default_sa.go:55] duration metric: took 12.263129ms for default service account to be created ...
	I1026 15:18:18.707171  170754 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 15:18:18.722583  170754 system_pods.go:86] 7 kube-system pods found
	I1026 15:18:18.722623  170754 system_pods.go:89] "coredns-66bc5c9577-5km5n" [da30f29b-ab29-4d65-ba42-0626bad52267] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:18:18.722631  170754 system_pods.go:89] "coredns-66bc5c9577-77frh" [af90376e-433e-4f19-b0c8-0ddf58a79b0b] Pending
	I1026 15:18:18.722641  170754 system_pods.go:89] "etcd-pause-750553" [b108b19d-4036-4cd5-8681-f0d2262a3c5c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:18:18.722649  170754 system_pods.go:89] "kube-apiserver-pause-750553" [dd5a0e81-80f5-4979-a26e-3d628737b8b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:18:18.722658  170754 system_pods.go:89] "kube-controller-manager-pause-750553" [d1922dca-907b-4987-a109-d9076b60a615] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:18:18.722670  170754 system_pods.go:89] "kube-proxy-5bgtf" [c84300cc-7cc1-4b0d-83e7-052a94f0c7ab] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1026 15:18:18.722677  170754 system_pods.go:89] "kube-scheduler-pause-750553" [c88ec255-28ce-4764-b0b2-ba5236312c0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:18:18.722710  170754 retry.go:31] will retry after 192.605388ms: missing components: kube-dns, kube-proxy
	I1026 15:18:18.920102  170754 system_pods.go:86] 7 kube-system pods found
	I1026 15:18:18.920132  170754 system_pods.go:89] "coredns-66bc5c9577-5km5n" [da30f29b-ab29-4d65-ba42-0626bad52267] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:18:18.920140  170754 system_pods.go:89] "coredns-66bc5c9577-77frh" [af90376e-433e-4f19-b0c8-0ddf58a79b0b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:18:18.920146  170754 system_pods.go:89] "etcd-pause-750553" [b108b19d-4036-4cd5-8681-f0d2262a3c5c] Running
	I1026 15:18:18.920154  170754 system_pods.go:89] "kube-apiserver-pause-750553" [dd5a0e81-80f5-4979-a26e-3d628737b8b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:18:18.920160  170754 system_pods.go:89] "kube-controller-manager-pause-750553" [d1922dca-907b-4987-a109-d9076b60a615] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:18:18.920165  170754 system_pods.go:89] "kube-proxy-5bgtf" [c84300cc-7cc1-4b0d-83e7-052a94f0c7ab] Running
	I1026 15:18:18.920171  170754 system_pods.go:89] "kube-scheduler-pause-750553" [c88ec255-28ce-4764-b0b2-ba5236312c0f] Running
	I1026 15:18:18.920190  170754 retry.go:31] will retry after 347.817824ms: missing components: kube-dns
	I1026 15:18:19.274785  170754 system_pods.go:86] 7 kube-system pods found
	I1026 15:18:19.274825  170754 system_pods.go:89] "coredns-66bc5c9577-5km5n" [da30f29b-ab29-4d65-ba42-0626bad52267] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:18:19.274835  170754 system_pods.go:89] "coredns-66bc5c9577-77frh" [af90376e-433e-4f19-b0c8-0ddf58a79b0b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:18:19.274843  170754 system_pods.go:89] "etcd-pause-750553" [b108b19d-4036-4cd5-8681-f0d2262a3c5c] Running
	I1026 15:18:19.274852  170754 system_pods.go:89] "kube-apiserver-pause-750553" [dd5a0e81-80f5-4979-a26e-3d628737b8b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:18:19.274861  170754 system_pods.go:89] "kube-controller-manager-pause-750553" [d1922dca-907b-4987-a109-d9076b60a615] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:18:19.274867  170754 system_pods.go:89] "kube-proxy-5bgtf" [c84300cc-7cc1-4b0d-83e7-052a94f0c7ab] Running
	I1026 15:18:19.274874  170754 system_pods.go:89] "kube-scheduler-pause-750553" [c88ec255-28ce-4764-b0b2-ba5236312c0f] Running
	I1026 15:18:19.274898  170754 retry.go:31] will retry after 438.1694ms: missing components: kube-dns
	I1026 15:18:19.717863  170754 system_pods.go:86] 7 kube-system pods found
	I1026 15:18:19.717910  170754 system_pods.go:89] "coredns-66bc5c9577-5km5n" [da30f29b-ab29-4d65-ba42-0626bad52267] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:18:19.717924  170754 system_pods.go:89] "coredns-66bc5c9577-77frh" [af90376e-433e-4f19-b0c8-0ddf58a79b0b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:18:19.717936  170754 system_pods.go:89] "etcd-pause-750553" [b108b19d-4036-4cd5-8681-f0d2262a3c5c] Running
	I1026 15:18:19.717948  170754 system_pods.go:89] "kube-apiserver-pause-750553" [dd5a0e81-80f5-4979-a26e-3d628737b8b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:18:19.717958  170754 system_pods.go:89] "kube-controller-manager-pause-750553" [d1922dca-907b-4987-a109-d9076b60a615] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:18:19.717969  170754 system_pods.go:89] "kube-proxy-5bgtf" [c84300cc-7cc1-4b0d-83e7-052a94f0c7ab] Running
	I1026 15:18:19.717973  170754 system_pods.go:89] "kube-scheduler-pause-750553" [c88ec255-28ce-4764-b0b2-ba5236312c0f] Running
	I1026 15:18:19.717995  170754 retry.go:31] will retry after 411.129085ms: missing components: kube-dns
	I1026 15:18:20.133294  170754 system_pods.go:86] 7 kube-system pods found
	I1026 15:18:20.133328  170754 system_pods.go:89] "coredns-66bc5c9577-5km5n" [da30f29b-ab29-4d65-ba42-0626bad52267] Running
	I1026 15:18:20.133336  170754 system_pods.go:89] "coredns-66bc5c9577-77frh" [af90376e-433e-4f19-b0c8-0ddf58a79b0b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:18:20.133341  170754 system_pods.go:89] "etcd-pause-750553" [b108b19d-4036-4cd5-8681-f0d2262a3c5c] Running
	I1026 15:18:20.133348  170754 system_pods.go:89] "kube-apiserver-pause-750553" [dd5a0e81-80f5-4979-a26e-3d628737b8b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:18:20.133354  170754 system_pods.go:89] "kube-controller-manager-pause-750553" [d1922dca-907b-4987-a109-d9076b60a615] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:18:20.133359  170754 system_pods.go:89] "kube-proxy-5bgtf" [c84300cc-7cc1-4b0d-83e7-052a94f0c7ab] Running
	I1026 15:18:20.133363  170754 system_pods.go:89] "kube-scheduler-pause-750553" [c88ec255-28ce-4764-b0b2-ba5236312c0f] Running
	I1026 15:18:20.133374  170754 system_pods.go:126] duration metric: took 1.426195569s to wait for k8s-apps to be running ...
	I1026 15:18:20.133381  170754 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 15:18:20.133428  170754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:18:20.151662  170754 system_svc.go:56] duration metric: took 18.267544ms WaitForService to wait for kubelet
	I1026 15:18:20.151702  170754 kubeadm.go:586] duration metric: took 3.071293423s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:18:20.151725  170754 node_conditions.go:102] verifying NodePressure condition ...
	I1026 15:18:20.155023  170754 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 15:18:20.155069  170754 node_conditions.go:123] node cpu capacity is 2
	I1026 15:18:20.155089  170754 node_conditions.go:105] duration metric: took 3.356426ms to run NodePressure ...
	I1026 15:18:20.155106  170754 start.go:241] waiting for startup goroutines ...
	I1026 15:18:20.155122  170754 start.go:246] waiting for cluster config update ...
	I1026 15:18:20.155134  170754 start.go:255] writing updated cluster config ...
	I1026 15:18:20.155530  170754 ssh_runner.go:195] Run: rm -f paused
	I1026 15:18:20.160694  170754 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:18:20.161558  170754 kapi.go:59] client config for pause-750553: &rest.Config{Host:"https://192.168.72.175:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21664-137233/.minikube/profiles/pause-750553/client.crt", KeyFile:"/home/jenkins/minikube-integration/21664-137233/.minikube/profiles/pause-750553/client.key", CAFile:"/home/jenkins/minikube-integration/21664-137233/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[
]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c6a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1026 15:18:20.164103  170754 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5km5n" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:18:20.168561  170754 pod_ready.go:94] pod "coredns-66bc5c9577-5km5n" is "Ready"
	I1026 15:18:20.168577  170754 pod_ready.go:86] duration metric: took 4.454081ms for pod "coredns-66bc5c9577-5km5n" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:18:20.168584  170754 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-77frh" in "kube-system" namespace to be "Ready" or be gone ...
	W1026 15:18:22.175599  170754 pod_ready.go:104] pod "coredns-66bc5c9577-77frh" is not "Ready", error: <nil>
	I1026 15:18:21.357857  178853 out.go:252]   - Booting up control plane ...
	I1026 15:18:21.357974  178853 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 15:18:21.360202  178853 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 15:18:21.361790  178853 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 15:18:21.379952  178853 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 15:18:21.380129  178853 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1026 15:18:21.387763  178853 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1026 15:18:21.388240  178853 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 15:18:21.388337  178853 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1026 15:18:21.560942  178853 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 15:18:21.561078  178853 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 15:18:22.562531  178853 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001586685s
	I1026 15:18:22.565505  178853 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1026 15:18:22.565613  178853 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.39.103:8443/livez
	I1026 15:18:22.565724  178853 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1026 15:18:22.565818  178853 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1026 15:18:25.181252  178853 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.617527962s
	W1026 15:18:25.023498  177820 pod_ready.go:104] pod "coredns-66bc5c9577-sqsf7" is not "Ready", error: <nil>
	W1026 15:18:27.023808  177820 pod_ready.go:104] pod "coredns-66bc5c9577-sqsf7" is not "Ready", error: <nil>
	I1026 15:18:26.840883  178853 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.278540407s
	I1026 15:18:28.066772  178853 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 5.50503235s
	I1026 15:18:28.083331  178853 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 15:18:28.102934  178853 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 15:18:28.120717  178853 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 15:18:28.120995  178853 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-163393 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 15:18:28.139172  178853 kubeadm.go:318] [bootstrap-token] Using token: uv77ly.o2qdsd5r72jmaiwn
	W1026 15:18:24.675876  170754 pod_ready.go:104] pod "coredns-66bc5c9577-77frh" is not "Ready", error: <nil>
	W1026 15:18:27.176609  170754 pod_ready.go:104] pod "coredns-66bc5c9577-77frh" is not "Ready", error: <nil>
	I1026 15:18:28.174961  170754 pod_ready.go:94] pod "coredns-66bc5c9577-77frh" is "Ready"
	I1026 15:18:28.175002  170754 pod_ready.go:86] duration metric: took 8.006410722s for pod "coredns-66bc5c9577-77frh" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:18:28.177658  170754 pod_ready.go:83] waiting for pod "etcd-pause-750553" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:18:28.183452  170754 pod_ready.go:94] pod "etcd-pause-750553" is "Ready"
	I1026 15:18:28.183508  170754 pod_ready.go:86] duration metric: took 5.819403ms for pod "etcd-pause-750553" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:18:28.185497  170754 pod_ready.go:83] waiting for pod "kube-apiserver-pause-750553" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:18:28.189812  170754 pod_ready.go:94] pod "kube-apiserver-pause-750553" is "Ready"
	I1026 15:18:28.189832  170754 pod_ready.go:86] duration metric: took 4.312219ms for pod "kube-apiserver-pause-750553" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:18:28.192873  170754 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-750553" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:18:28.140488  178853 out.go:252]   - Configuring RBAC rules ...
	I1026 15:18:28.140649  178853 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 15:18:28.156992  178853 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 15:18:28.175040  178853 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 15:18:28.179874  178853 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 15:18:28.184364  178853 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 15:18:28.188857  178853 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 15:18:28.475121  178853 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 15:18:28.955825  178853 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1026 15:18:29.474209  178853 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1026 15:18:29.475071  178853 kubeadm.go:318] 
	I1026 15:18:29.475158  178853 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1026 15:18:29.475199  178853 kubeadm.go:318] 
	I1026 15:18:29.475323  178853 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1026 15:18:29.475334  178853 kubeadm.go:318] 
	I1026 15:18:29.475371  178853 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1026 15:18:29.475489  178853 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 15:18:29.475577  178853 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 15:18:29.475587  178853 kubeadm.go:318] 
	I1026 15:18:29.475672  178853 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1026 15:18:29.475681  178853 kubeadm.go:318] 
	I1026 15:18:29.475772  178853 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 15:18:29.475793  178853 kubeadm.go:318] 
	I1026 15:18:29.475911  178853 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1026 15:18:29.476047  178853 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 15:18:29.476168  178853 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 15:18:29.476180  178853 kubeadm.go:318] 
	I1026 15:18:29.476305  178853 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 15:18:29.476412  178853 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1026 15:18:29.476421  178853 kubeadm.go:318] 
	I1026 15:18:29.476548  178853 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token uv77ly.o2qdsd5r72jmaiwn \
	I1026 15:18:29.476713  178853 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3ad055a424ab8eb6b83482448af651001c6d6c03abf832b7f498f66a21acb6be \
	I1026 15:18:29.476749  178853 kubeadm.go:318] 	--control-plane 
	I1026 15:18:29.476758  178853 kubeadm.go:318] 
	I1026 15:18:29.476866  178853 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1026 15:18:29.476877  178853 kubeadm.go:318] 
	I1026 15:18:29.476976  178853 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token uv77ly.o2qdsd5r72jmaiwn \
	I1026 15:18:29.477165  178853 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3ad055a424ab8eb6b83482448af651001c6d6c03abf832b7f498f66a21acb6be 
	I1026 15:18:29.477909  178853 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 15:18:29.477943  178853 cni.go:84] Creating CNI manager for ""
	I1026 15:18:29.477957  178853 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 15:18:29.479970  178853 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1026 15:18:28.374112  170754 pod_ready.go:94] pod "kube-controller-manager-pause-750553" is "Ready"
	I1026 15:18:28.374139  170754 pod_ready.go:86] duration metric: took 181.243358ms for pod "kube-controller-manager-pause-750553" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:18:28.574273  170754 pod_ready.go:83] waiting for pod "kube-proxy-5bgtf" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:18:28.974129  170754 pod_ready.go:94] pod "kube-proxy-5bgtf" is "Ready"
	I1026 15:18:28.974172  170754 pod_ready.go:86] duration metric: took 399.869701ms for pod "kube-proxy-5bgtf" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:18:29.174265  170754 pod_ready.go:83] waiting for pod "kube-scheduler-pause-750553" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:18:29.572768  170754 pod_ready.go:94] pod "kube-scheduler-pause-750553" is "Ready"
	I1026 15:18:29.572795  170754 pod_ready.go:86] duration metric: took 398.503317ms for pod "kube-scheduler-pause-750553" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:18:29.572809  170754 pod_ready.go:40] duration metric: took 9.412085156s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:18:29.629035  170754 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1026 15:18:29.631724  170754 out.go:179] * Done! kubectl is now configured to use "pause-750553" cluster and "default" namespace by default
	I1026 15:18:29.480921  178853 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1026 15:18:29.495876  178853 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1026 15:18:29.520334  178853 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 15:18:29.520441  178853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:29.520518  178853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-163393 minikube.k8s.io/updated_at=2025_10_26T15_18_29_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46 minikube.k8s.io/name=embed-certs-163393 minikube.k8s.io/primary=true
	I1026 15:18:29.563775  178853 ops.go:34] apiserver oom_adj: -16
	I1026 15:18:29.642145  178853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:30.143082  178853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:30.642417  178853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 15:18:31.143173  178853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	
	
	==> CRI-O <==
	Oct 26 15:18:32 pause-750553 crio[3324]: time="2025-10-26 15:18:32.013530627Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761491912013494914,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c79015df-48b4-4677-ba23-522e8602512d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 15:18:32 pause-750553 crio[3324]: time="2025-10-26 15:18:32.015362788Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3b5ca69b-f548-4d48-922b-8f29096f529c name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:18:32 pause-750553 crio[3324]: time="2025-10-26 15:18:32.015582137Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3b5ca69b-f548-4d48-922b-8f29096f529c name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:18:32 pause-750553 crio[3324]: time="2025-10-26 15:18:32.015841071Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6000f3862c3bb59168eb5789b29855f6fe826c24b55b140532012346a3664e64,PodSandboxId:94ed34380cfcf4cd73383420814845da94f014f3ba0b6c09814ee19fe6f672f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761491899288067036,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-77frh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af90376e-433e-4f19-b0c8-0ddf58a79b0b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:877e736c8a7717179c2a1b8478ae3d3f10083854c127b1e3fbc1d0eea61bfb86,PodSandboxId:72f7e3e974ada8cd01f30b7d152d3b86f57bb2afb5696d4a056c343322181b8d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761491899267648352,Labels:map[string]stri
ng{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5km5n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da30f29b-ab29-4d65-ba42-0626bad52267,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93975cb982e3c43d536d8251d5e9e4e136461cdd62deed78f47cec56d90e8d8e,PodSandboxId:0fb764a83b14ea8704a84aad67ea34c00d725bad24bbaf5c1577d01a6300b6b1,Metadata:&Containe
rMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761491897825343884,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5bgtf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c84300cc-7cc1-4b0d-83e7-052a94f0c7ab,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b42948e1829ec46e8761bc5ed39e7079218fdceee31fdb5333c6eb75bcfc6a3,PodSandboxId:44b7aa6a20a66d9c2d746eaee8c6b5310f84779a6c17b0c0a1e8b5a2730aa5f8,Metadata:&ContainerMetadata{Name:kube-controller-ma
nager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761491885346208328,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-750553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdff4b99713a5dca7c65f03b35941135,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06f4c2a4038ffdca314bf94eef82912d08718933a8cae1d63fbe6923b81887
44,PodSandboxId:5f708a2c2e9f6b0cea87f1df2cdfbe287122a9c151a57d632e798acef445d3a2,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761491885341513589,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-750553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 354c919aa8057eb2212dd92b7f739c9e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Co
ntainer{Id:9e1e8ba0c02401d2b683f372fc58cc11cdf5c439bd9edff51b9c110ece60aaf5,PodSandboxId:d73d285385390edc7f994b50018eb219f96cf2788b24eb770f05b3e07b0e2ded,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761491885298487671,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-750553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8db747c6973e70300bcb02a4b50ac30,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9be07d11240a80f1c05c43acb18334335ac1ad7b6ff2cb2952cc120638c677ec,PodSandboxId:8862f3b75169e70a09160cc029cfcbfe98cf85f45288264acfcbae6e973a3e20,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761491885287823817,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-750553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b6af0b874a7a82d2f4d0e4e41f269fc,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3b5ca69b-f548-4d48-922b-8f29096f529c name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:18:32 pause-750553 crio[3324]: time="2025-10-26 15:18:32.059487064Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d65fbd54-9974-4059-95f6-be8df2bd6270 name=/runtime.v1.RuntimeService/Version
	Oct 26 15:18:32 pause-750553 crio[3324]: time="2025-10-26 15:18:32.059669572Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d65fbd54-9974-4059-95f6-be8df2bd6270 name=/runtime.v1.RuntimeService/Version
	Oct 26 15:18:32 pause-750553 crio[3324]: time="2025-10-26 15:18:32.061440359Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4dd3af92-a43c-4ab1-8c49-ff11762bbb9b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 15:18:32 pause-750553 crio[3324]: time="2025-10-26 15:18:32.061975713Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761491912061952075,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4dd3af92-a43c-4ab1-8c49-ff11762bbb9b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 15:18:32 pause-750553 crio[3324]: time="2025-10-26 15:18:32.062501708Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=01ef064e-2f0a-4973-941a-17175a82a681 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:18:32 pause-750553 crio[3324]: time="2025-10-26 15:18:32.062614304Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=01ef064e-2f0a-4973-941a-17175a82a681 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:18:32 pause-750553 crio[3324]: time="2025-10-26 15:18:32.063135925Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6000f3862c3bb59168eb5789b29855f6fe826c24b55b140532012346a3664e64,PodSandboxId:94ed34380cfcf4cd73383420814845da94f014f3ba0b6c09814ee19fe6f672f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761491899288067036,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-77frh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af90376e-433e-4f19-b0c8-0ddf58a79b0b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:877e736c8a7717179c2a1b8478ae3d3f10083854c127b1e3fbc1d0eea61bfb86,PodSandboxId:72f7e3e974ada8cd01f30b7d152d3b86f57bb2afb5696d4a056c343322181b8d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761491899267648352,Labels:map[string]stri
ng{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5km5n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da30f29b-ab29-4d65-ba42-0626bad52267,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93975cb982e3c43d536d8251d5e9e4e136461cdd62deed78f47cec56d90e8d8e,PodSandboxId:0fb764a83b14ea8704a84aad67ea34c00d725bad24bbaf5c1577d01a6300b6b1,Metadata:&Containe
rMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761491897825343884,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5bgtf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c84300cc-7cc1-4b0d-83e7-052a94f0c7ab,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b42948e1829ec46e8761bc5ed39e7079218fdceee31fdb5333c6eb75bcfc6a3,PodSandboxId:44b7aa6a20a66d9c2d746eaee8c6b5310f84779a6c17b0c0a1e8b5a2730aa5f8,Metadata:&ContainerMetadata{Name:kube-controller-ma
nager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761491885346208328,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-750553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdff4b99713a5dca7c65f03b35941135,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06f4c2a4038ffdca314bf94eef82912d08718933a8cae1d63fbe6923b81887
44,PodSandboxId:5f708a2c2e9f6b0cea87f1df2cdfbe287122a9c151a57d632e798acef445d3a2,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761491885341513589,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-750553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 354c919aa8057eb2212dd92b7f739c9e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Co
ntainer{Id:9e1e8ba0c02401d2b683f372fc58cc11cdf5c439bd9edff51b9c110ece60aaf5,PodSandboxId:d73d285385390edc7f994b50018eb219f96cf2788b24eb770f05b3e07b0e2ded,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761491885298487671,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-750553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8db747c6973e70300bcb02a4b50ac30,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9be07d11240a80f1c05c43acb18334335ac1ad7b6ff2cb2952cc120638c677ec,PodSandboxId:8862f3b75169e70a09160cc029cfcbfe98cf85f45288264acfcbae6e973a3e20,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761491885287823817,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-750553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b6af0b874a7a82d2f4d0e4e41f269fc,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=01ef064e-2f0a-4973-941a-17175a82a681 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:18:32 pause-750553 crio[3324]: time="2025-10-26 15:18:32.105287097Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=42f7ace2-6c32-4435-b70d-69323bcbf324 name=/runtime.v1.RuntimeService/Version
	Oct 26 15:18:32 pause-750553 crio[3324]: time="2025-10-26 15:18:32.105374692Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=42f7ace2-6c32-4435-b70d-69323bcbf324 name=/runtime.v1.RuntimeService/Version
	Oct 26 15:18:32 pause-750553 crio[3324]: time="2025-10-26 15:18:32.106809586Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1b06b970-4442-4437-8c17-40878b18a1e9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 15:18:32 pause-750553 crio[3324]: time="2025-10-26 15:18:32.107297942Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761491912107275059,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1b06b970-4442-4437-8c17-40878b18a1e9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 15:18:32 pause-750553 crio[3324]: time="2025-10-26 15:18:32.107852672Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9ae9cef6-8b3c-4902-842a-33b3fa684be5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:18:32 pause-750553 crio[3324]: time="2025-10-26 15:18:32.107928988Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9ae9cef6-8b3c-4902-842a-33b3fa684be5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:18:32 pause-750553 crio[3324]: time="2025-10-26 15:18:32.108282452Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6000f3862c3bb59168eb5789b29855f6fe826c24b55b140532012346a3664e64,PodSandboxId:94ed34380cfcf4cd73383420814845da94f014f3ba0b6c09814ee19fe6f672f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761491899288067036,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-77frh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af90376e-433e-4f19-b0c8-0ddf58a79b0b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:877e736c8a7717179c2a1b8478ae3d3f10083854c127b1e3fbc1d0eea61bfb86,PodSandboxId:72f7e3e974ada8cd01f30b7d152d3b86f57bb2afb5696d4a056c343322181b8d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761491899267648352,Labels:map[string]stri
ng{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5km5n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da30f29b-ab29-4d65-ba42-0626bad52267,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93975cb982e3c43d536d8251d5e9e4e136461cdd62deed78f47cec56d90e8d8e,PodSandboxId:0fb764a83b14ea8704a84aad67ea34c00d725bad24bbaf5c1577d01a6300b6b1,Metadata:&Containe
rMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761491897825343884,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5bgtf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c84300cc-7cc1-4b0d-83e7-052a94f0c7ab,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b42948e1829ec46e8761bc5ed39e7079218fdceee31fdb5333c6eb75bcfc6a3,PodSandboxId:44b7aa6a20a66d9c2d746eaee8c6b5310f84779a6c17b0c0a1e8b5a2730aa5f8,Metadata:&ContainerMetadata{Name:kube-controller-ma
nager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761491885346208328,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-750553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdff4b99713a5dca7c65f03b35941135,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06f4c2a4038ffdca314bf94eef82912d08718933a8cae1d63fbe6923b81887
44,PodSandboxId:5f708a2c2e9f6b0cea87f1df2cdfbe287122a9c151a57d632e798acef445d3a2,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761491885341513589,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-750553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 354c919aa8057eb2212dd92b7f739c9e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Co
ntainer{Id:9e1e8ba0c02401d2b683f372fc58cc11cdf5c439bd9edff51b9c110ece60aaf5,PodSandboxId:d73d285385390edc7f994b50018eb219f96cf2788b24eb770f05b3e07b0e2ded,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761491885298487671,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-750553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8db747c6973e70300bcb02a4b50ac30,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9be07d11240a80f1c05c43acb18334335ac1ad7b6ff2cb2952cc120638c677ec,PodSandboxId:8862f3b75169e70a09160cc029cfcbfe98cf85f45288264acfcbae6e973a3e20,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761491885287823817,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-750553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b6af0b874a7a82d2f4d0e4e41f269fc,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9ae9cef6-8b3c-4902-842a-33b3fa684be5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:18:32 pause-750553 crio[3324]: time="2025-10-26 15:18:32.157362084Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d3fafd9e-5820-4b70-a3ae-00ac8ea56299 name=/runtime.v1.RuntimeService/Version
	Oct 26 15:18:32 pause-750553 crio[3324]: time="2025-10-26 15:18:32.157437915Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d3fafd9e-5820-4b70-a3ae-00ac8ea56299 name=/runtime.v1.RuntimeService/Version
	Oct 26 15:18:32 pause-750553 crio[3324]: time="2025-10-26 15:18:32.159742339Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8a59dcc5-47fb-42c4-8fcb-56e13ee50b7d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 15:18:32 pause-750553 crio[3324]: time="2025-10-26 15:18:32.160487762Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761491912160451497,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8a59dcc5-47fb-42c4-8fcb-56e13ee50b7d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 15:18:32 pause-750553 crio[3324]: time="2025-10-26 15:18:32.161310723Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=25c68ae7-3b56-4847-ae5b-e3a06b008a67 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:18:32 pause-750553 crio[3324]: time="2025-10-26 15:18:32.161401072Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=25c68ae7-3b56-4847-ae5b-e3a06b008a67 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:18:32 pause-750553 crio[3324]: time="2025-10-26 15:18:32.161644564Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6000f3862c3bb59168eb5789b29855f6fe826c24b55b140532012346a3664e64,PodSandboxId:94ed34380cfcf4cd73383420814845da94f014f3ba0b6c09814ee19fe6f672f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761491899288067036,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-77frh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af90376e-433e-4f19-b0c8-0ddf58a79b0b,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:877e736c8a7717179c2a1b8478ae3d3f10083854c127b1e3fbc1d0eea61bfb86,PodSandboxId:72f7e3e974ada8cd01f30b7d152d3b86f57bb2afb5696d4a056c343322181b8d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761491899267648352,Labels:map[string]stri
ng{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5km5n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da30f29b-ab29-4d65-ba42-0626bad52267,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93975cb982e3c43d536d8251d5e9e4e136461cdd62deed78f47cec56d90e8d8e,PodSandboxId:0fb764a83b14ea8704a84aad67ea34c00d725bad24bbaf5c1577d01a6300b6b1,Metadata:&Containe
rMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761491897825343884,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5bgtf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c84300cc-7cc1-4b0d-83e7-052a94f0c7ab,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b42948e1829ec46e8761bc5ed39e7079218fdceee31fdb5333c6eb75bcfc6a3,PodSandboxId:44b7aa6a20a66d9c2d746eaee8c6b5310f84779a6c17b0c0a1e8b5a2730aa5f8,Metadata:&ContainerMetadata{Name:kube-controller-ma
nager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761491885346208328,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-750553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdff4b99713a5dca7c65f03b35941135,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06f4c2a4038ffdca314bf94eef82912d08718933a8cae1d63fbe6923b81887
44,PodSandboxId:5f708a2c2e9f6b0cea87f1df2cdfbe287122a9c151a57d632e798acef445d3a2,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761491885341513589,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-750553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 354c919aa8057eb2212dd92b7f739c9e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Co
ntainer{Id:9e1e8ba0c02401d2b683f372fc58cc11cdf5c439bd9edff51b9c110ece60aaf5,PodSandboxId:d73d285385390edc7f994b50018eb219f96cf2788b24eb770f05b3e07b0e2ded,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761491885298487671,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-750553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8db747c6973e70300bcb02a4b50ac30,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io
.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9be07d11240a80f1c05c43acb18334335ac1ad7b6ff2cb2952cc120638c677ec,PodSandboxId:8862f3b75169e70a09160cc029cfcbfe98cf85f45288264acfcbae6e973a3e20,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761491885287823817,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-750553,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b6af0b874a7a82d2f4d0e4e41f269fc,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=25c68ae7-3b56-4847-ae5b-e3a06b008a67 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6000f3862c3bb       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   12 seconds ago      Running             coredns                   0                   94ed34380cfcf       coredns-66bc5c9577-77frh
	877e736c8a771       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   12 seconds ago      Running             coredns                   0                   72f7e3e974ada       coredns-66bc5c9577-5km5n
	93975cb982e3c       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   14 seconds ago      Running             kube-proxy                0                   0fb764a83b14e       kube-proxy-5bgtf
	4b42948e1829e       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   26 seconds ago      Running             kube-controller-manager   1                   44b7aa6a20a66       kube-controller-manager-pause-750553
	06f4c2a4038ff       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   26 seconds ago      Running             etcd                      3                   5f708a2c2e9f6       etcd-pause-750553
	9e1e8ba0c0240       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   26 seconds ago      Running             kube-scheduler            3                   d73d285385390       kube-scheduler-pause-750553
	9be07d11240a8       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   26 seconds ago      Running             kube-apiserver            1                   8862f3b75169e       kube-apiserver-pause-750553
	
	
	==> coredns [6000f3862c3bb59168eb5789b29855f6fe826c24b55b140532012346a3664e64] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	
	
	==> coredns [877e736c8a7717179c2a1b8478ae3d3f10083854c127b1e3fbc1d0eea61bfb86] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	
	
	==> describe nodes <==
	Name:               pause-750553
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-750553
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46
	                    minikube.k8s.io/name=pause-750553
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T15_18_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 15:18:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-750553
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 15:18:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 15:18:21 +0000   Sun, 26 Oct 2025 15:18:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 15:18:21 +0000   Sun, 26 Oct 2025 15:18:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 15:18:21 +0000   Sun, 26 Oct 2025 15:18:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 15:18:21 +0000   Sun, 26 Oct 2025 15:18:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.175
	  Hostname:    pause-750553
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 ded7bfe485724686a7a119dd93a16d6b
	  System UUID:                ded7bfe4-8572-4686-a7a1-19dd93a16d6b
	  Boot ID:                    5113f6ec-d58a-4acb-8b90-586e2ab854c9
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-5km5n                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     15s
	  kube-system                 coredns-66bc5c9577-77frh                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     15s
	  kube-system                 etcd-pause-750553                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         22s
	  kube-system                 kube-apiserver-pause-750553             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22s
	  kube-system                 kube-controller-manager-pause-750553    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22s
	  kube-system                 kube-proxy-5bgtf                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16s
	  kube-system                 kube-scheduler-pause-750553             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             240Mi (8%)  340Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 13s   kube-proxy       
	  Normal  Starting                 22s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  22s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22s   kubelet          Node pause-750553 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s   kubelet          Node pause-750553 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s   kubelet          Node pause-750553 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           17s   node-controller  Node pause-750553 event: Registered Node pause-750553 in Controller
	
	
	==> dmesg <==
	[Oct26 15:11] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000076] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.007466] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.163488] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.089232] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.108001] kauditd_printk_skb: 130 callbacks suppressed
	[  +0.135228] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.255252] kauditd_printk_skb: 18 callbacks suppressed
	[ +10.670777] kauditd_printk_skb: 222 callbacks suppressed
	[Oct26 15:12] kauditd_printk_skb: 38 callbacks suppressed
	[Oct26 15:13] kauditd_printk_skb: 247 callbacks suppressed
	[Oct26 15:17] kauditd_printk_skb: 124 callbacks suppressed
	[Oct26 15:18] kauditd_printk_skb: 46 callbacks suppressed
	[  +0.150519] kauditd_printk_skb: 110 callbacks suppressed
	[  +5.924778] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.796551] kauditd_printk_skb: 140 callbacks suppressed
	
	
	==> etcd [06f4c2a4038ffdca314bf94eef82912d08718933a8cae1d63fbe6923b8188744] <==
	{"level":"info","ts":"2025-10-26T15:18:18.197468Z","caller":"traceutil/trace.go:172","msg":"trace[1134370359] transaction","detail":"{read_only:false; response_revision:328; number_of_response:1; }","duration":"1.098385485s","start":"2025-10-26T15:18:17.099049Z","end":"2025-10-26T15:18:18.197435Z","steps":["trace[1134370359] 'process raft request'  (duration: 547.078131ms)","trace[1134370359] 'compare'  (duration: 548.93273ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-26T15:18:18.198201Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-26T15:18:17.099024Z","time spent":"1.099115874s","remote":"127.0.0.1:48098","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":762,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/kube-proxy-5bgtf.1872138c311adbb3\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-proxy-5bgtf.1872138c311adbb3\" value_size:682 lease:8088915872680160577 >> failure:<>"}
	{"level":"info","ts":"2025-10-26T15:18:18.199494Z","caller":"traceutil/trace.go:172","msg":"trace[15442059] transaction","detail":"{read_only:false; response_revision:329; number_of_response:1; }","duration":"900.125476ms","start":"2025-10-26T15:18:17.299354Z","end":"2025-10-26T15:18:18.199480Z","steps":["trace[15442059] 'process raft request'  (duration: 896.608019ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-26T15:18:18.199832Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-26T15:18:17.299311Z","time spent":"900.29979ms","remote":"127.0.0.1:48098","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":704,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/coredns-66bc5c9577.1872138c3d05a1bb\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/coredns-66bc5c9577.1872138c3d05a1bb\" value_size:622 lease:8088915872680160577 >> failure:<>"}
	{"level":"info","ts":"2025-10-26T15:18:18.200743Z","caller":"traceutil/trace.go:172","msg":"trace[1637677112] transaction","detail":"{read_only:false; response_revision:330; number_of_response:1; }","duration":"898.044101ms","start":"2025-10-26T15:18:17.302326Z","end":"2025-10-26T15:18:18.200370Z","steps":["trace[1637677112] 'process raft request'  (duration: 893.686293ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-26T15:18:18.200846Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-26T15:18:17.302311Z","time spent":"898.471763ms","remote":"127.0.0.1:48330","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3812,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/coredns-66bc5c9577-77frh\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-66bc5c9577-77frh\" value_size:3753 >> failure:<>"}
	{"level":"info","ts":"2025-10-26T15:18:18.200938Z","caller":"traceutil/trace.go:172","msg":"trace[920803324] transaction","detail":"{read_only:false; response_revision:331; number_of_response:1; }","duration":"898.411822ms","start":"2025-10-26T15:18:17.302510Z","end":"2025-10-26T15:18:18.200921Z","steps":["trace[920803324] 'process raft request'  (duration: 893.538782ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-26T15:18:18.204474Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-26T15:18:17.302502Z","time spent":"898.45714ms","remote":"127.0.0.1:48330","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3864,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/coredns-66bc5c9577-5km5n\" mod_revision:327 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-66bc5c9577-5km5n\" value_size:3805 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-66bc5c9577-5km5n\" > >"}
	{"level":"info","ts":"2025-10-26T15:18:18.654437Z","caller":"traceutil/trace.go:172","msg":"trace[1244551045] transaction","detail":"{read_only:false; response_revision:332; number_of_response:1; }","duration":"440.187335ms","start":"2025-10-26T15:18:18.214208Z","end":"2025-10-26T15:18:18.654395Z","steps":["trace[1244551045] 'process raft request'  (duration: 439.953323ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-26T15:18:18.654626Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-26T15:18:18.214190Z","time spent":"440.308132ms","remote":"127.0.0.1:48992","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4041,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/replicasets/kube-system/coredns-66bc5c9577\" mod_revision:294 > success:<request_put:<key:\"/registry/replicasets/kube-system/coredns-66bc5c9577\" value_size:3981 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/coredns-66bc5c9577\" > >"}
	{"level":"info","ts":"2025-10-26T15:18:18.658045Z","caller":"traceutil/trace.go:172","msg":"trace[1379694249] linearizableReadLoop","detail":"{readStateIndex:341; appliedIndex:341; }","duration":"416.294224ms","start":"2025-10-26T15:18:18.237556Z","end":"2025-10-26T15:18:18.653850Z","steps":["trace[1379694249] 'read index received'  (duration: 416.287897ms)","trace[1379694249] 'applied index is now lower than readState.Index'  (duration: 5.11µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-26T15:18:18.659724Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"422.176747ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-26T15:18:18.659832Z","caller":"traceutil/trace.go:172","msg":"trace[890621526] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:332; }","duration":"422.293886ms","start":"2025-10-26T15:18:18.237524Z","end":"2025-10-26T15:18:18.659818Z","steps":["trace[890621526] 'agreement among raft nodes before linearized reading'  (duration: 422.095836ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-26T15:18:18.659875Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-26T15:18:18.237510Z","time spent":"422.350905ms","remote":"127.0.0.1:47984","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2025-10-26T15:18:18.661379Z","caller":"traceutil/trace.go:172","msg":"trace[552245983] transaction","detail":"{read_only:false; response_revision:333; number_of_response:1; }","duration":"444.454992ms","start":"2025-10-26T15:18:18.216905Z","end":"2025-10-26T15:18:18.661360Z","steps":["trace[552245983] 'process raft request'  (duration: 444.230925ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-26T15:18:18.661781Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-26T15:18:18.216887Z","time spent":"444.651428ms","remote":"127.0.0.1:48098","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":704,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/coredns-66bc5c9577.1872138c73139c4f\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/coredns-66bc5c9577.1872138c73139c4f\" value_size:622 lease:8088915872680160577 >> failure:<>"}
	{"level":"info","ts":"2025-10-26T15:18:18.662617Z","caller":"traceutil/trace.go:172","msg":"trace[143703881] transaction","detail":"{read_only:false; response_revision:338; number_of_response:1; }","duration":"116.569075ms","start":"2025-10-26T15:18:18.545908Z","end":"2025-10-26T15:18:18.662477Z","steps":["trace[143703881] 'process raft request'  (duration: 116.521675ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T15:18:18.662944Z","caller":"traceutil/trace.go:172","msg":"trace[1415387288] transaction","detail":"{read_only:false; response_revision:334; number_of_response:1; }","duration":"443.743937ms","start":"2025-10-26T15:18:18.219189Z","end":"2025-10-26T15:18:18.662932Z","steps":["trace[1415387288] 'process raft request'  (duration: 442.057429ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-26T15:18:18.663032Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-26T15:18:18.219175Z","time spent":"443.816235ms","remote":"127.0.0.1:48330","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3864,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/coredns-66bc5c9577-77frh\" mod_revision:330 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-66bc5c9577-77frh\" value_size:3805 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-66bc5c9577-77frh\" > >"}
	{"level":"info","ts":"2025-10-26T15:18:18.663301Z","caller":"traceutil/trace.go:172","msg":"trace[314646202] transaction","detail":"{read_only:false; response_revision:336; number_of_response:1; }","duration":"436.215141ms","start":"2025-10-26T15:18:18.227072Z","end":"2025-10-26T15:18:18.663288Z","steps":["trace[314646202] 'process raft request'  (duration: 435.226263ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-26T15:18:18.663470Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-26T15:18:18.227055Z","time spent":"436.276909ms","remote":"127.0.0.1:48476","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":676,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-b47hz2nhtkyt3kispd6ru45xuq\" mod_revision:19 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-b47hz2nhtkyt3kispd6ru45xuq\" value_size:603 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-b47hz2nhtkyt3kispd6ru45xuq\" > >"}
	{"level":"info","ts":"2025-10-26T15:18:18.664041Z","caller":"traceutil/trace.go:172","msg":"trace[296606446] transaction","detail":"{read_only:false; response_revision:335; number_of_response:1; }","duration":"442.818054ms","start":"2025-10-26T15:18:18.221212Z","end":"2025-10-26T15:18:18.664030Z","steps":["trace[296606446] 'process raft request'  (duration: 440.084665ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-26T15:18:18.664212Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-26T15:18:18.221201Z","time spent":"442.979759ms","remote":"127.0.0.1:48098","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":723,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/coredns-66bc5c9577-5km5n.1872138c73cfc66b\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/coredns-66bc5c9577-5km5n.1872138c73cfc66b\" value_size:635 lease:8088915872680160577 >> failure:<>"}
	{"level":"info","ts":"2025-10-26T15:18:18.665274Z","caller":"traceutil/trace.go:172","msg":"trace[946875473] transaction","detail":"{read_only:false; response_revision:337; number_of_response:1; }","duration":"435.376136ms","start":"2025-10-26T15:18:18.229887Z","end":"2025-10-26T15:18:18.665263Z","steps":["trace[946875473] 'process raft request'  (duration: 432.506355ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-26T15:18:18.665521Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-26T15:18:18.229873Z","time spent":"435.528774ms","remote":"127.0.0.1:48330","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5955,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/etcd-pause-750553\" mod_revision:266 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-pause-750553\" value_size:5903 >> failure:<request_range:<key:\"/registry/pods/kube-system/etcd-pause-750553\" > >"}
	
	
	==> kernel <==
	 15:18:32 up 7 min,  0 users,  load average: 0.99, 0.51, 0.24
	Linux pause-750553 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [9be07d11240a80f1c05c43acb18334335ac1ad7b6ff2cb2952cc120638c677ec] <==
	I1026 15:18:08.142353       1 autoregister_controller.go:144] Starting autoregister controller
	I1026 15:18:08.142376       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1026 15:18:08.142392       1 cache.go:39] Caches are synced for autoregister controller
	I1026 15:18:08.166442       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 15:18:08.168606       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1026 15:18:08.189043       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 15:18:08.196211       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 15:18:08.200729       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1026 15:18:08.902544       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1026 15:18:08.911206       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1026 15:18:08.911241       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 15:18:09.522003       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 15:18:09.569718       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 15:18:09.720049       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1026 15:18:09.731827       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.72.175]
	I1026 15:18:09.733561       1 controller.go:667] quota admission added evaluator for: endpoints
	I1026 15:18:09.739716       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 15:18:10.447955       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 15:18:10.813642       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 15:18:10.846556       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1026 15:18:10.860225       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1026 15:18:15.836386       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1026 15:18:16.137023       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1026 15:18:16.501275       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 15:18:16.506435       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [4b42948e1829ec46e8761bc5ed39e7079218fdceee31fdb5333c6eb75bcfc6a3] <==
	I1026 15:18:15.433786       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1026 15:18:15.434291       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1026 15:18:15.434351       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1026 15:18:15.434812       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1026 15:18:15.436171       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1026 15:18:15.436189       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1026 15:18:15.436238       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1026 15:18:15.437444       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1026 15:18:15.437492       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1026 15:18:15.437540       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1026 15:18:15.437572       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1026 15:18:15.437619       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1026 15:18:15.438960       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1026 15:18:15.439073       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1026 15:18:15.439133       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1026 15:18:15.439139       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1026 15:18:15.439143       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1026 15:18:15.442387       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 15:18:15.444823       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1026 15:18:15.452816       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-750553" podCIDRs=["10.244.0.0/24"]
	I1026 15:18:15.453891       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1026 15:18:15.457160       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1026 15:18:15.458429       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 15:18:15.472951       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1026 15:18:15.474190       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [93975cb982e3c43d536d8251d5e9e4e136461cdd62deed78f47cec56d90e8d8e] <==
	I1026 15:18:18.640259       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 15:18:18.740718       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 15:18:18.740744       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.72.175"]
	E1026 15:18:18.740802       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 15:18:18.809822       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1026 15:18:18.809952       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1026 15:18:18.809980       1 server_linux.go:132] "Using iptables Proxier"
	I1026 15:18:18.819475       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 15:18:18.819753       1 server.go:527] "Version info" version="v1.34.1"
	I1026 15:18:18.819780       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:18:18.825798       1 config.go:200] "Starting service config controller"
	I1026 15:18:18.825829       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 15:18:18.825864       1 config.go:106] "Starting endpoint slice config controller"
	I1026 15:18:18.825867       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 15:18:18.825877       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 15:18:18.825880       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 15:18:18.832865       1 config.go:309] "Starting node config controller"
	I1026 15:18:18.832896       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 15:18:18.832902       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 15:18:18.926283       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 15:18:18.926454       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 15:18:18.926471       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [9e1e8ba0c02401d2b683f372fc58cc11cdf5c439bd9edff51b9c110ece60aaf5] <==
	E1026 15:18:08.159700       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 15:18:08.159758       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 15:18:08.159813       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1026 15:18:08.160069       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1026 15:18:08.164559       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1026 15:18:08.167060       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 15:18:08.167210       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1026 15:18:08.168576       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1026 15:18:08.171372       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 15:18:08.174137       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 15:18:08.175261       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1026 15:18:08.175323       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1026 15:18:08.175474       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1026 15:18:08.175504       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1026 15:18:08.175555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1026 15:18:08.180191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1026 15:18:08.180193       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 15:18:09.038163       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1026 15:18:09.048501       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1026 15:18:09.059588       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1026 15:18:09.098306       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 15:18:09.116035       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 15:18:09.199502       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1026 15:18:09.257997       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1026 15:18:09.832655       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 15:18:11 pause-750553 kubelet[10545]: E1026 15:18:11.861021   10545 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-750553\" already exists" pod="kube-system/kube-apiserver-pause-750553"
	Oct 26 15:18:11 pause-750553 kubelet[10545]: E1026 15:18:11.861646   10545 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-750553\" already exists" pod="kube-system/etcd-pause-750553"
	Oct 26 15:18:11 pause-750553 kubelet[10545]: E1026 15:18:11.861763   10545 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-750553\" already exists" pod="kube-system/kube-scheduler-pause-750553"
	Oct 26 15:18:11 pause-750553 kubelet[10545]: I1026 15:18:11.894886   10545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-750553" podStartSLOduration=1.894869929 podStartE2EDuration="1.894869929s" podCreationTimestamp="2025-10-26 15:18:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:18:11.882070242 +0000 UTC m=+1.237954049" watchObservedRunningTime="2025-10-26 15:18:11.894869929 +0000 UTC m=+1.250753733"
	Oct 26 15:18:11 pause-750553 kubelet[10545]: I1026 15:18:11.910717   10545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-750553" podStartSLOduration=1.910702003 podStartE2EDuration="1.910702003s" podCreationTimestamp="2025-10-26 15:18:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:18:11.896173355 +0000 UTC m=+1.252057163" watchObservedRunningTime="2025-10-26 15:18:11.910702003 +0000 UTC m=+1.266585844"
	Oct 26 15:18:11 pause-750553 kubelet[10545]: I1026 15:18:11.925450   10545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-750553" podStartSLOduration=1.925426055 podStartE2EDuration="1.925426055s" podCreationTimestamp="2025-10-26 15:18:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:18:11.911561552 +0000 UTC m=+1.267445365" watchObservedRunningTime="2025-10-26 15:18:11.925426055 +0000 UTC m=+1.281309864"
	Oct 26 15:18:16 pause-750553 kubelet[10545]: I1026 15:18:16.171669   10545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-750553" podStartSLOduration=6.17163011 podStartE2EDuration="6.17163011s" podCreationTimestamp="2025-10-26 15:18:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:18:11.926478978 +0000 UTC m=+1.282362767" watchObservedRunningTime="2025-10-26 15:18:16.17163011 +0000 UTC m=+5.527513944"
	Oct 26 15:18:16 pause-750553 kubelet[10545]: I1026 15:18:16.197132   10545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c84300cc-7cc1-4b0d-83e7-052a94f0c7ab-xtables-lock\") pod \"kube-proxy-5bgtf\" (UID: \"c84300cc-7cc1-4b0d-83e7-052a94f0c7ab\") " pod="kube-system/kube-proxy-5bgtf"
	Oct 26 15:18:16 pause-750553 kubelet[10545]: I1026 15:18:16.197168   10545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c84300cc-7cc1-4b0d-83e7-052a94f0c7ab-kube-proxy\") pod \"kube-proxy-5bgtf\" (UID: \"c84300cc-7cc1-4b0d-83e7-052a94f0c7ab\") " pod="kube-system/kube-proxy-5bgtf"
	Oct 26 15:18:16 pause-750553 kubelet[10545]: I1026 15:18:16.197183   10545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htmpg\" (UniqueName: \"kubernetes.io/projected/c84300cc-7cc1-4b0d-83e7-052a94f0c7ab-kube-api-access-htmpg\") pod \"kube-proxy-5bgtf\" (UID: \"c84300cc-7cc1-4b0d-83e7-052a94f0c7ab\") " pod="kube-system/kube-proxy-5bgtf"
	Oct 26 15:18:16 pause-750553 kubelet[10545]: I1026 15:18:16.197202   10545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c84300cc-7cc1-4b0d-83e7-052a94f0c7ab-lib-modules\") pod \"kube-proxy-5bgtf\" (UID: \"c84300cc-7cc1-4b0d-83e7-052a94f0c7ab\") " pod="kube-system/kube-proxy-5bgtf"
	Oct 26 15:18:18 pause-750553 kubelet[10545]: I1026 15:18:18.718759   10545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/da30f29b-ab29-4d65-ba42-0626bad52267-config-volume\") pod \"coredns-66bc5c9577-5km5n\" (UID: \"da30f29b-ab29-4d65-ba42-0626bad52267\") " pod="kube-system/coredns-66bc5c9577-5km5n"
	Oct 26 15:18:18 pause-750553 kubelet[10545]: I1026 15:18:18.719482   10545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6fm8\" (UniqueName: \"kubernetes.io/projected/af90376e-433e-4f19-b0c8-0ddf58a79b0b-kube-api-access-s6fm8\") pod \"coredns-66bc5c9577-77frh\" (UID: \"af90376e-433e-4f19-b0c8-0ddf58a79b0b\") " pod="kube-system/coredns-66bc5c9577-77frh"
	Oct 26 15:18:18 pause-750553 kubelet[10545]: I1026 15:18:18.719681   10545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mstb8\" (UniqueName: \"kubernetes.io/projected/da30f29b-ab29-4d65-ba42-0626bad52267-kube-api-access-mstb8\") pod \"coredns-66bc5c9577-5km5n\" (UID: \"da30f29b-ab29-4d65-ba42-0626bad52267\") " pod="kube-system/coredns-66bc5c9577-5km5n"
	Oct 26 15:18:18 pause-750553 kubelet[10545]: I1026 15:18:18.719715   10545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af90376e-433e-4f19-b0c8-0ddf58a79b0b-config-volume\") pod \"coredns-66bc5c9577-77frh\" (UID: \"af90376e-433e-4f19-b0c8-0ddf58a79b0b\") " pod="kube-system/coredns-66bc5c9577-77frh"
	Oct 26 15:18:18 pause-750553 kubelet[10545]: I1026 15:18:18.893059   10545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5bgtf" podStartSLOduration=2.8929712800000003 podStartE2EDuration="2.89297128s" podCreationTimestamp="2025-10-26 15:18:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:18:18.892773818 +0000 UTC m=+8.248657604" watchObservedRunningTime="2025-10-26 15:18:18.89297128 +0000 UTC m=+8.248855086"
	Oct 26 15:18:19 pause-750553 kubelet[10545]: I1026 15:18:19.902377   10545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-77frh" podStartSLOduration=2.9023631659999998 podStartE2EDuration="2.902363166s" podCreationTimestamp="2025-10-26 15:18:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:18:19.902330952 +0000 UTC m=+9.258214759" watchObservedRunningTime="2025-10-26 15:18:19.902363166 +0000 UTC m=+9.258246972"
	Oct 26 15:18:20 pause-750553 kubelet[10545]: E1026 15:18:20.879893   10545 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761491900879413193  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 26 15:18:20 pause-750553 kubelet[10545]: E1026 15:18:20.879955   10545 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761491900879413193  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 26 15:18:21 pause-750553 kubelet[10545]: I1026 15:18:21.229256   10545 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 26 15:18:21 pause-750553 kubelet[10545]: I1026 15:18:21.230207   10545 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 26 15:18:22 pause-750553 kubelet[10545]: I1026 15:18:22.301329   10545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-5km5n" podStartSLOduration=5.301313806 podStartE2EDuration="5.301313806s" podCreationTimestamp="2025-10-26 15:18:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 15:18:19.92125913 +0000 UTC m=+9.277142920" watchObservedRunningTime="2025-10-26 15:18:22.301313806 +0000 UTC m=+11.657197612"
	Oct 26 15:18:27 pause-750553 kubelet[10545]: I1026 15:18:27.806693   10545 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 26 15:18:30 pause-750553 kubelet[10545]: E1026 15:18:30.881436   10545 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761491910880904314  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Oct 26 15:18:30 pause-750553 kubelet[10545]: E1026 15:18:30.881454   10545 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761491910880904314  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-750553 -n pause-750553
helpers_test.go:269: (dbg) Run:  kubectl --context pause-750553 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (379.99s)
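The kubelet log above ends with repeated "Eviction manager: failed to get HasDedicatedImageFs ... missing image stats" errors, i.e. the kubelet kept rejecting the image-filesystem stats it received from CRI-O after the second start. A minimal diagnostic sketch, assuming SSH access to the pause-750553 node is still available (these commands are a sketch, not output from this run):

	out/minikube-linux-amd64 -p pause-750553 ssh "sudo crictl imagefsinfo"   # image filesystem usage as reported by CRI-O
	out/minikube-linux-amd64 -p pause-750553 ssh "sudo crictl info"          # runtime status and storage configuration

If imagefsinfo returns the same single /var/lib/containers/storage/overlay-images entry that appears in the error text, the runtime is answering, which would point at how the kubelet interprets those stats rather than at CRI-O being unreachable.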

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-nxc8p" [ee5a7e88-da7c-4c3b-bae0-abbaf5ff76bc] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1026 15:21:38.448651  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/auto-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-163393 -n embed-certs-163393
start_stop_delete_test.go:272: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-10-26 15:30:36.873325153 +0000 UTC m=+4543.466756595
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context embed-certs-163393 describe po kubernetes-dashboard-855c9754f9-nxc8p -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) kubectl --context embed-certs-163393 describe po kubernetes-dashboard-855c9754f9-nxc8p -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-nxc8p
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             embed-certs-163393/192.168.39.103
Start Time:       Sun, 26 Oct 2025 15:21:25 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.10
IPs:
IP:           10.244.0.10
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7g7gr (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-7g7gr:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  9m11s                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-nxc8p to embed-certs-163393
Warning  Failed     8m34s                   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": fetching target platform image selected from manifest list: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    4m22s (x5 over 9m10s)   kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     3m50s (x5 over 8m34s)   kubelet            Error: ErrImagePull
Warning  Failed     3m50s (x4 over 7m51s)   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     2m17s (x16 over 8m34s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    75s (x21 over 8m34s)    kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context embed-certs-163393 logs kubernetes-dashboard-855c9754f9-nxc8p -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) Non-zero exit: kubectl --context embed-certs-163393 logs kubernetes-dashboard-855c9754f9-nxc8p -n kubernetes-dashboard: exit status 1 (82.717466ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-nxc8p" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:272: kubectl --context embed-certs-163393 logs kubernetes-dashboard-855c9754f9-nxc8p -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
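The Events in the describe output above show why the 9m0s wait timed out: every attempt to pull docker.io/kubernetesui/dashboard:v2.7.0 was rejected by Docker Hub's unauthenticated "toomanyrequests" rate limit, so the pod never left ImagePullBackOff. One possible mitigation sketch, assuming the CI host already has the image in its local cache or can pull it with authenticated credentials, is to side-load it into the node so the kubelet never needs an anonymous Docker Hub pull (note the deployment pins a sha256 digest, so the loaded image has to match it):

	out/minikube-linux-amd64 -p embed-certs-163393 image load docker.io/kubernetesui/dashboard:v2.7.0
	out/minikube-linux-amd64 -p embed-certs-163393 ssh "sudo crictl images | grep dashboard"   # confirm the image is present on the node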
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-163393 -n embed-certs-163393
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-163393 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-163393 logs -n 25: (1.141863906s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────
─────┐
	│ COMMAND │                                                                                                                    ARGS                                                                                                                     │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────
─────┤
	│ start   │ -p no-preload-758002 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-758002            │ jenkins │ v1.37.0 │ 26 Oct 25 15:20 UTC │ 26 Oct 25 15:21 UTC │
	│ addons  │ enable dashboard -p embed-certs-163393 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                               │ embed-certs-163393           │ jenkins │ v1.37.0 │ 26 Oct 25 15:20 UTC │ 26 Oct 25 15:20 UTC │
	│ start   │ -p embed-certs-163393 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-163393           │ jenkins │ v1.37.0 │ 26 Oct 25 15:20 UTC │ 26 Oct 25 15:21 UTC │
	│ image   │ old-k8s-version-065983 image list --format=json                                                                                                                                                                                             │ old-k8s-version-065983       │ jenkins │ v1.37.0 │ 26 Oct 25 15:20 UTC │ 26 Oct 25 15:20 UTC │
	│ pause   │ -p old-k8s-version-065983 --alsologtostderr -v=1                                                                                                                                                                                            │ old-k8s-version-065983       │ jenkins │ v1.37.0 │ 26 Oct 25 15:20 UTC │ 26 Oct 25 15:20 UTC │
	│ unpause │ -p old-k8s-version-065983 --alsologtostderr -v=1                                                                                                                                                                                            │ old-k8s-version-065983       │ jenkins │ v1.37.0 │ 26 Oct 25 15:20 UTC │ 26 Oct 25 15:20 UTC │
	│ delete  │ -p old-k8s-version-065983                                                                                                                                                                                                                   │ old-k8s-version-065983       │ jenkins │ v1.37.0 │ 26 Oct 25 15:20 UTC │ 26 Oct 25 15:21 UTC │
	│ delete  │ -p old-k8s-version-065983                                                                                                                                                                                                                   │ old-k8s-version-065983       │ jenkins │ v1.37.0 │ 26 Oct 25 15:21 UTC │ 26 Oct 25 15:21 UTC │
	│ start   │ -p newest-cni-574718 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-574718            │ jenkins │ v1.37.0 │ 26 Oct 25 15:21 UTC │ 26 Oct 25 15:21 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-705037 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                     │ default-k8s-diff-port-705037 │ jenkins │ v1.37.0 │ 26 Oct 25 15:21 UTC │ 26 Oct 25 15:21 UTC │
	│ start   │ -p default-k8s-diff-port-705037 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-705037 │ jenkins │ v1.37.0 │ 26 Oct 25 15:21 UTC │ 26 Oct 25 15:22 UTC │
	│ image   │ no-preload-758002 image list --format=json                                                                                                                                                                                                  │ no-preload-758002            │ jenkins │ v1.37.0 │ 26 Oct 25 15:21 UTC │ 26 Oct 25 15:21 UTC │
	│ pause   │ -p no-preload-758002 --alsologtostderr -v=1                                                                                                                                                                                                 │ no-preload-758002            │ jenkins │ v1.37.0 │ 26 Oct 25 15:21 UTC │ 26 Oct 25 15:21 UTC │
	│ unpause │ -p no-preload-758002 --alsologtostderr -v=1                                                                                                                                                                                                 │ no-preload-758002            │ jenkins │ v1.37.0 │ 26 Oct 25 15:21 UTC │ 26 Oct 25 15:21 UTC │
	│ delete  │ -p no-preload-758002                                                                                                                                                                                                                        │ no-preload-758002            │ jenkins │ v1.37.0 │ 26 Oct 25 15:21 UTC │ 26 Oct 25 15:21 UTC │
	│ delete  │ -p no-preload-758002                                                                                                                                                                                                                        │ no-preload-758002            │ jenkins │ v1.37.0 │ 26 Oct 25 15:21 UTC │ 26 Oct 25 15:21 UTC │
	│ addons  │ enable metrics-server -p newest-cni-574718 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                     │ newest-cni-574718            │ jenkins │ v1.37.0 │ 26 Oct 25 15:21 UTC │ 26 Oct 25 15:21 UTC │
	│ stop    │ -p newest-cni-574718 --alsologtostderr -v=3                                                                                                                                                                                                 │ newest-cni-574718            │ jenkins │ v1.37.0 │ 26 Oct 25 15:21 UTC │ 26 Oct 25 15:22 UTC │
	│ addons  │ enable dashboard -p newest-cni-574718 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                │ newest-cni-574718            │ jenkins │ v1.37.0 │ 26 Oct 25 15:22 UTC │ 26 Oct 25 15:22 UTC │
	│ start   │ -p newest-cni-574718 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-574718            │ jenkins │ v1.37.0 │ 26 Oct 25 15:22 UTC │ 26 Oct 25 15:22 UTC │
	│ image   │ newest-cni-574718 image list --format=json                                                                                                                                                                                                  │ newest-cni-574718            │ jenkins │ v1.37.0 │ 26 Oct 25 15:22 UTC │ 26 Oct 25 15:22 UTC │
	│ pause   │ -p newest-cni-574718 --alsologtostderr -v=1                                                                                                                                                                                                 │ newest-cni-574718            │ jenkins │ v1.37.0 │ 26 Oct 25 15:22 UTC │ 26 Oct 25 15:22 UTC │
	│ unpause │ -p newest-cni-574718 --alsologtostderr -v=1                                                                                                                                                                                                 │ newest-cni-574718            │ jenkins │ v1.37.0 │ 26 Oct 25 15:22 UTC │ 26 Oct 25 15:22 UTC │
	│ delete  │ -p newest-cni-574718                                                                                                                                                                                                                        │ newest-cni-574718            │ jenkins │ v1.37.0 │ 26 Oct 25 15:22 UTC │ 26 Oct 25 15:22 UTC │
	│ delete  │ -p newest-cni-574718                                                                                                                                                                                                                        │ newest-cni-574718            │ jenkins │ v1.37.0 │ 26 Oct 25 15:22 UTC │ 26 Oct 25 15:22 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────
─────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 15:22:08
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 15:22:08.024156  182377 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:22:08.024392  182377 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:22:08.024406  182377 out.go:374] Setting ErrFile to fd 2...
	I1026 15:22:08.024410  182377 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:22:08.024606  182377 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-137233/.minikube/bin
	I1026 15:22:08.025048  182377 out.go:368] Setting JSON to false
	I1026 15:22:08.025981  182377 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7462,"bootTime":1761484666,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 15:22:08.026077  182377 start.go:141] virtualization: kvm guest
	I1026 15:22:08.027688  182377 out.go:179] * [newest-cni-574718] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 15:22:08.028960  182377 notify.go:220] Checking for updates...
	I1026 15:22:08.028993  182377 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 15:22:08.030046  182377 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 15:22:08.031185  182377 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-137233/kubeconfig
	I1026 15:22:08.032356  182377 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-137233/.minikube
	I1026 15:22:08.033461  182377 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 15:22:08.034474  182377 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 15:22:08.035832  182377 config.go:182] Loaded profile config "newest-cni-574718": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:22:08.036313  182377 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 15:22:08.072389  182377 out.go:179] * Using the kvm2 driver based on existing profile
	I1026 15:22:08.073663  182377 start.go:305] selected driver: kvm2
	I1026 15:22:08.073682  182377 start.go:925] validating driver "kvm2" against &{Name:newest-cni-574718 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:newest-cni-574718 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.33 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s S
cheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:22:08.073825  182377 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 15:22:08.075175  182377 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1026 15:22:08.075218  182377 cni.go:84] Creating CNI manager for ""
	I1026 15:22:08.075284  182377 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 15:22:08.075345  182377 start.go:349] cluster config:
	{Name:newest-cni-574718 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-574718 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.33 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRe
quested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:22:08.075449  182377 iso.go:125] acquiring lock: {Name:mkfe78fcc13f0f0cc3fec30206c34a5da423b32d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:22:08.077008  182377 out.go:179] * Starting "newest-cni-574718" primary control-plane node in "newest-cni-574718" cluster
	I1026 15:22:08.078030  182377 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:22:08.078073  182377 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-137233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1026 15:22:08.078088  182377 cache.go:58] Caching tarball of preloaded images
	I1026 15:22:08.078221  182377 preload.go:233] Found /home/jenkins/minikube-integration/21664-137233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 15:22:08.078236  182377 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 15:22:08.078334  182377 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/newest-cni-574718/config.json ...
	I1026 15:22:08.078601  182377 start.go:360] acquireMachinesLock for newest-cni-574718: {Name:mka0e861669c2f6d38861d0614c7d3b8dd89392c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1026 15:22:08.078675  182377 start.go:364] duration metric: took 45.376µs to acquireMachinesLock for "newest-cni-574718"
	I1026 15:22:08.078701  182377 start.go:96] Skipping create...Using existing machine configuration
	I1026 15:22:08.078711  182377 fix.go:54] fixHost starting: 
	I1026 15:22:08.080626  182377 fix.go:112] recreateIfNeeded on newest-cni-574718: state=Stopped err=<nil>
	W1026 15:22:08.080669  182377 fix.go:138] unexpected machine state, will restart: <nil>
	I1026 15:22:06.333558  181858 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:22:06.357436  181858 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-705037" to be "Ready" ...
	I1026 15:22:06.360857  181858 node_ready.go:49] node "default-k8s-diff-port-705037" is "Ready"
	I1026 15:22:06.360901  181858 node_ready.go:38] duration metric: took 3.362736ms for node "default-k8s-diff-port-705037" to be "Ready" ...
	I1026 15:22:06.360919  181858 api_server.go:52] waiting for apiserver process to appear ...
	I1026 15:22:06.360981  181858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:22:06.385860  181858 api_server.go:72] duration metric: took 266.62216ms to wait for apiserver process to appear ...
	I1026 15:22:06.385897  181858 api_server.go:88] waiting for apiserver healthz status ...
	I1026 15:22:06.385937  181858 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8444/healthz ...
	I1026 15:22:06.392647  181858 api_server.go:279] https://192.168.72.253:8444/healthz returned 200:
	ok
	I1026 15:22:06.393766  181858 api_server.go:141] control plane version: v1.34.1
	I1026 15:22:06.393803  181858 api_server.go:131] duration metric: took 7.895398ms to wait for apiserver health ...
	I1026 15:22:06.393816  181858 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 15:22:06.397637  181858 system_pods.go:59] 8 kube-system pods found
	I1026 15:22:06.397674  181858 system_pods.go:61] "coredns-66bc5c9577-fs558" [35c18482-b39d-4e3f-aafd-51642938f5b0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:22:06.397686  181858 system_pods.go:61] "etcd-default-k8s-diff-port-705037" [8f9b42db-0213-4e05-b438-59d38eab399b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:22:06.397698  181858 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-705037" [b8aa7de2-f2f9-447e-83a4-ce4eed131bf3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:22:06.397709  181858 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-705037" [48a3f44e-dfb0-46cb-969f-cf88e075e662] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:22:06.397718  181858 system_pods.go:61] "kube-proxy-kr5kl" [7598b50f-deee-406f-86fc-1f57c2de4887] Running
	I1026 15:22:06.397728  181858 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-705037" [130cd574-dab4-4029-9fa0-47959d8b0eac] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:22:06.397746  181858 system_pods.go:61] "metrics-server-746fcd58dc-nsvb5" [28c11adc-3f4d-46bc-abc5-f9b466e2ca10] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 15:22:06.397756  181858 system_pods.go:61] "storage-provisioner" [974398e3-6fd7-44da-9ec6-a726c71c9e43] Running
	I1026 15:22:06.397766  181858 system_pods.go:74] duration metric: took 3.941599ms to wait for pod list to return data ...
	I1026 15:22:06.397779  181858 default_sa.go:34] waiting for default service account to be created ...
	I1026 15:22:06.403865  181858 default_sa.go:45] found service account: "default"
	I1026 15:22:06.403888  181858 default_sa.go:55] duration metric: took 6.102699ms for default service account to be created ...
	I1026 15:22:06.403898  181858 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 15:22:06.408267  181858 system_pods.go:86] 8 kube-system pods found
	I1026 15:22:06.408305  181858 system_pods.go:89] "coredns-66bc5c9577-fs558" [35c18482-b39d-4e3f-aafd-51642938f5b0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:22:06.408318  181858 system_pods.go:89] "etcd-default-k8s-diff-port-705037" [8f9b42db-0213-4e05-b438-59d38eab399b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:22:06.408330  181858 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-705037" [b8aa7de2-f2f9-447e-83a4-ce4eed131bf3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:22:06.408339  181858 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-705037" [48a3f44e-dfb0-46cb-969f-cf88e075e662] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:22:06.408345  181858 system_pods.go:89] "kube-proxy-kr5kl" [7598b50f-deee-406f-86fc-1f57c2de4887] Running
	I1026 15:22:06.408354  181858 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-705037" [130cd574-dab4-4029-9fa0-47959d8b0eac] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:22:06.408361  181858 system_pods.go:89] "metrics-server-746fcd58dc-nsvb5" [28c11adc-3f4d-46bc-abc5-f9b466e2ca10] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 15:22:06.408373  181858 system_pods.go:89] "storage-provisioner" [974398e3-6fd7-44da-9ec6-a726c71c9e43] Running
	I1026 15:22:06.408383  181858 system_pods.go:126] duration metric: took 4.477868ms to wait for k8s-apps to be running ...
	I1026 15:22:06.408393  181858 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 15:22:06.408450  181858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:22:06.432635  181858 system_svc.go:56] duration metric: took 24.227246ms WaitForService to wait for kubelet
	I1026 15:22:06.432676  181858 kubeadm.go:586] duration metric: took 313.448447ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:22:06.432702  181858 node_conditions.go:102] verifying NodePressure condition ...
	I1026 15:22:06.435956  181858 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 15:22:06.435988  181858 node_conditions.go:123] node cpu capacity is 2
	I1026 15:22:06.436002  181858 node_conditions.go:105] duration metric: took 3.294076ms to run NodePressure ...
	I1026 15:22:06.436018  181858 start.go:241] waiting for startup goroutines ...
	I1026 15:22:06.515065  181858 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 15:22:06.572989  181858 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:22:06.584697  181858 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1026 15:22:06.584737  181858 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1026 15:22:06.595077  181858 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1026 15:22:06.595106  181858 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1026 15:22:06.638704  181858 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1026 15:22:06.638736  181858 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1026 15:22:06.659544  181858 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1026 15:22:06.659582  181858 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1026 15:22:06.702281  181858 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 15:22:06.702320  181858 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1026 15:22:06.711972  181858 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1026 15:22:06.712006  181858 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1026 15:22:06.757866  181858 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 15:22:06.788030  181858 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1026 15:22:06.788064  181858 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1026 15:22:06.847661  181858 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1026 15:22:06.847708  181858 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1026 15:22:06.929153  181858 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1026 15:22:06.929177  181858 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1026 15:22:06.986412  181858 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1026 15:22:06.986448  181858 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1026 15:22:07.045193  181858 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1026 15:22:07.045218  181858 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1026 15:22:07.093617  181858 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 15:22:07.093654  181858 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1026 15:22:07.162711  181858 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 15:22:08.298101  181858 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.725070201s)
	I1026 15:22:08.369209  181858 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.61128174s)
	I1026 15:22:08.369257  181858 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-705037"
	I1026 15:22:08.605124  181858 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.442357492s)
	I1026 15:22:08.606598  181858 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-705037 addons enable metrics-server
	
	I1026 15:22:08.607892  181858 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I1026 15:22:08.609005  181858 addons.go:514] duration metric: took 2.489743866s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I1026 15:22:08.609043  181858 start.go:246] waiting for cluster config update ...
	I1026 15:22:08.609058  181858 start.go:255] writing updated cluster config ...
	I1026 15:22:08.609345  181858 ssh_runner.go:195] Run: rm -f paused
	I1026 15:22:08.616260  181858 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:22:08.620760  181858 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-fs558" in "kube-system" namespace to be "Ready" or be gone ...
	W1026 15:22:10.628668  181858 pod_ready.go:104] pod "coredns-66bc5c9577-fs558" is not "Ready", error: <nil>
	I1026 15:22:08.082049  182377 out.go:252] * Restarting existing kvm2 VM for "newest-cni-574718" ...
	I1026 15:22:08.082089  182377 main.go:141] libmachine: starting domain...
	I1026 15:22:08.082102  182377 main.go:141] libmachine: ensuring networks are active...
	I1026 15:22:08.083029  182377 main.go:141] libmachine: Ensuring network default is active
	I1026 15:22:08.083543  182377 main.go:141] libmachine: Ensuring network mk-newest-cni-574718 is active
	I1026 15:22:08.084108  182377 main.go:141] libmachine: getting domain XML...
	I1026 15:22:08.085257  182377 main.go:141] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>newest-cni-574718</name>
	  <uuid>3e8359f9-dc38-4472-b6d3-ffe603a5ee64</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21664-137233/.minikube/machines/newest-cni-574718/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21664-137233/.minikube/machines/newest-cni-574718/newest-cni-574718.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:7b:b5:97'/>
	      <source network='mk-newest-cni-574718'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:a1:2e:d8'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
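The XML above is the domain definition libmachine uses when restarting the newest-cni-574718 VM. If the "waiting for IP" / "no route to host" phase that follows needed investigating, the definition and addresses libvirt actually holds could be checked from the Jenkins host with virsh (a diagnostic sketch, not output from this run; assumes the libvirt client tools are installed on the host):

	sudo virsh dumpxml newest-cni-574718        # domain definition as stored by libvirt
	sudo virsh domifaddr newest-cni-574718      # addresses currently assigned to the domain's interfaces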
	
	I1026 15:22:09.396910  182377 main.go:141] libmachine: waiting for domain to start...
	I1026 15:22:09.398416  182377 main.go:141] libmachine: domain is now running
	I1026 15:22:09.398445  182377 main.go:141] libmachine: waiting for IP...
	I1026 15:22:09.399448  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:09.400230  182377 main.go:141] libmachine: domain newest-cni-574718 has current primary IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:09.400244  182377 main.go:141] libmachine: found domain IP: 192.168.61.33
	I1026 15:22:09.400250  182377 main.go:141] libmachine: reserving static IP address...
	I1026 15:22:09.400772  182377 main.go:141] libmachine: found host DHCP lease matching {name: "newest-cni-574718", mac: "52:54:00:7b:b5:97", ip: "192.168.61.33"} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:21:24 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:09.400809  182377 main.go:141] libmachine: skip adding static IP to network mk-newest-cni-574718 - found existing host DHCP lease matching {name: "newest-cni-574718", mac: "52:54:00:7b:b5:97", ip: "192.168.61.33"}
	I1026 15:22:09.400837  182377 main.go:141] libmachine: reserved static IP address 192.168.61.33 for domain newest-cni-574718
	I1026 15:22:09.400849  182377 main.go:141] libmachine: waiting for SSH...
	I1026 15:22:09.400857  182377 main.go:141] libmachine: Getting to WaitForSSH function...
	I1026 15:22:09.403391  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:09.403822  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:21:24 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:09.403850  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:09.404075  182377 main.go:141] libmachine: Using SSH client type: native
	I1026 15:22:09.404289  182377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 192.168.61.33 22 <nil> <nil>}
	I1026 15:22:09.404299  182377 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1026 15:22:12.493681  182377 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.33:22: connect: no route to host
	W1026 15:22:12.635327  181858 pod_ready.go:104] pod "coredns-66bc5c9577-fs558" is not "Ready", error: <nil>
	I1026 15:22:14.627621  181858 pod_ready.go:94] pod "coredns-66bc5c9577-fs558" is "Ready"
	I1026 15:22:14.627655  181858 pod_ready.go:86] duration metric: took 6.00687198s for pod "coredns-66bc5c9577-fs558" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:22:14.630599  181858 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-705037" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:22:14.634975  181858 pod_ready.go:94] pod "etcd-default-k8s-diff-port-705037" is "Ready"
	I1026 15:22:14.635007  181858 pod_ready.go:86] duration metric: took 4.382539ms for pod "etcd-default-k8s-diff-port-705037" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:22:14.637185  181858 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-705037" in "kube-system" namespace to be "Ready" or be gone ...
	W1026 15:22:16.644581  181858 pod_ready.go:104] pod "kube-apiserver-default-k8s-diff-port-705037" is not "Ready", error: <nil>
	W1026 15:22:19.144809  181858 pod_ready.go:104] pod "kube-apiserver-default-k8s-diff-port-705037" is not "Ready", error: <nil>
	I1026 15:22:20.143611  181858 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-705037" is "Ready"
	I1026 15:22:20.143640  181858 pod_ready.go:86] duration metric: took 5.506432171s for pod "kube-apiserver-default-k8s-diff-port-705037" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:22:20.145536  181858 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-705037" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:22:20.149100  181858 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-705037" is "Ready"
	I1026 15:22:20.149131  181858 pod_ready.go:86] duration metric: took 3.572718ms for pod "kube-controller-manager-default-k8s-diff-port-705037" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:22:20.151047  181858 pod_ready.go:83] waiting for pod "kube-proxy-kr5kl" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:22:20.155496  181858 pod_ready.go:94] pod "kube-proxy-kr5kl" is "Ready"
	I1026 15:22:20.155521  181858 pod_ready.go:86] duration metric: took 4.452008ms for pod "kube-proxy-kr5kl" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:22:20.157137  181858 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-705037" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:22:20.424601  181858 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-705037" is "Ready"
	I1026 15:22:20.424645  181858 pod_ready.go:86] duration metric: took 267.484691ms for pod "kube-scheduler-default-k8s-diff-port-705037" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:22:20.424664  181858 pod_ready.go:40] duration metric: took 11.808360636s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:22:20.472398  181858 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1026 15:22:20.474272  181858 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-705037" cluster and "default" namespace by default
	I1026 15:22:18.573877  182377 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.33:22: connect: no route to host
	I1026 15:22:21.678716  182377 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 15:22:21.682223  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:21.682617  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:21.682640  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:21.682859  182377 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/newest-cni-574718/config.json ...
	I1026 15:22:21.683068  182377 machine.go:93] provisionDockerMachine start ...
	I1026 15:22:21.685439  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:21.685814  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:21.685841  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:21.686028  182377 main.go:141] libmachine: Using SSH client type: native
	I1026 15:22:21.686280  182377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 192.168.61.33 22 <nil> <nil>}
	I1026 15:22:21.686297  182377 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 15:22:21.789433  182377 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1026 15:22:21.789491  182377 buildroot.go:166] provisioning hostname "newest-cni-574718"
	I1026 15:22:21.792404  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:21.792911  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:21.792937  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:21.793176  182377 main.go:141] libmachine: Using SSH client type: native
	I1026 15:22:21.793395  182377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 192.168.61.33 22 <nil> <nil>}
	I1026 15:22:21.793410  182377 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-574718 && echo "newest-cni-574718" | sudo tee /etc/hostname
	I1026 15:22:21.914128  182377 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-574718
	
	I1026 15:22:21.917275  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:21.917738  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:21.917764  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:21.917937  182377 main.go:141] libmachine: Using SSH client type: native
	I1026 15:22:21.918176  182377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 192.168.61.33 22 <nil> <nil>}
	I1026 15:22:21.918200  182377 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-574718' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-574718/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-574718' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 15:22:22.026151  182377 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 15:22:22.026183  182377 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21664-137233/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-137233/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-137233/.minikube}
	I1026 15:22:22.026217  182377 buildroot.go:174] setting up certificates
	I1026 15:22:22.026229  182377 provision.go:84] configureAuth start
	I1026 15:22:22.029052  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:22.029554  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:22.029582  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:22.031873  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:22.032223  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:22.032249  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:22.032371  182377 provision.go:143] copyHostCerts
	I1026 15:22:22.032450  182377 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-137233/.minikube/ca.pem, removing ...
	I1026 15:22:22.032491  182377 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-137233/.minikube/ca.pem
	I1026 15:22:22.032577  182377 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-137233/.minikube/ca.pem (1082 bytes)
	I1026 15:22:22.032704  182377 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-137233/.minikube/cert.pem, removing ...
	I1026 15:22:22.032719  182377 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-137233/.minikube/cert.pem
	I1026 15:22:22.032762  182377 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-137233/.minikube/cert.pem (1123 bytes)
	I1026 15:22:22.032845  182377 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-137233/.minikube/key.pem, removing ...
	I1026 15:22:22.032855  182377 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-137233/.minikube/key.pem
	I1026 15:22:22.032893  182377 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-137233/.minikube/key.pem (1675 bytes)
	I1026 15:22:22.032958  182377 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-137233/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca-key.pem org=jenkins.newest-cni-574718 san=[127.0.0.1 192.168.61.33 localhost minikube newest-cni-574718]
	I1026 15:22:22.469944  182377 provision.go:177] copyRemoteCerts
	I1026 15:22:22.470018  182377 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 15:22:22.472561  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:22.472948  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:22.472970  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:22.473117  182377 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/newest-cni-574718/id_rsa Username:docker}
	I1026 15:22:22.554777  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 15:22:22.582124  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1026 15:22:22.610149  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 15:22:22.638169  182377 provision.go:87] duration metric: took 611.92185ms to configureAuth
	I1026 15:22:22.638199  182377 buildroot.go:189] setting minikube options for container-runtime
	I1026 15:22:22.638398  182377 config.go:182] Loaded profile config "newest-cni-574718": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:22:22.641177  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:22.641627  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:22.641657  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:22.641842  182377 main.go:141] libmachine: Using SSH client type: native
	I1026 15:22:22.642047  182377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 192.168.61.33 22 <nil> <nil>}
	I1026 15:22:22.642063  182377 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 15:22:22.906384  182377 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 15:22:22.906420  182377 machine.go:96] duration metric: took 1.223336761s to provisionDockerMachine
	I1026 15:22:22.906434  182377 start.go:293] postStartSetup for "newest-cni-574718" (driver="kvm2")
	I1026 15:22:22.906449  182377 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 15:22:22.906556  182377 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 15:22:22.909934  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:22.910412  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:22.910439  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:22.910638  182377 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/newest-cni-574718/id_rsa Username:docker}
	I1026 15:22:22.992977  182377 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 15:22:22.997825  182377 info.go:137] Remote host: Buildroot 2025.02
	I1026 15:22:22.997860  182377 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-137233/.minikube/addons for local assets ...
	I1026 15:22:22.997933  182377 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-137233/.minikube/files for local assets ...
	I1026 15:22:22.998039  182377 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-137233/.minikube/files/etc/ssl/certs/1412332.pem -> 1412332.pem in /etc/ssl/certs
	I1026 15:22:22.998136  182377 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 15:22:23.009341  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/files/etc/ssl/certs/1412332.pem --> /etc/ssl/certs/1412332.pem (1708 bytes)
	I1026 15:22:23.040890  182377 start.go:296] duration metric: took 134.438124ms for postStartSetup
	I1026 15:22:23.040950  182377 fix.go:56] duration metric: took 14.962237903s for fixHost
	I1026 15:22:23.044164  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:23.044594  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:23.044630  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:23.044933  182377 main.go:141] libmachine: Using SSH client type: native
	I1026 15:22:23.045233  182377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 192.168.61.33 22 <nil> <nil>}
	I1026 15:22:23.045254  182377 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1026 15:22:23.147520  182377 main.go:141] libmachine: SSH cmd err, output: <nil>: 1761492143.098139468
	
	I1026 15:22:23.147547  182377 fix.go:216] guest clock: 1761492143.098139468
	I1026 15:22:23.147556  182377 fix.go:229] Guest: 2025-10-26 15:22:23.098139468 +0000 UTC Remote: 2025-10-26 15:22:23.04095679 +0000 UTC m=+15.073904102 (delta=57.182678ms)
	I1026 15:22:23.147581  182377 fix.go:200] guest clock delta is within tolerance: 57.182678ms
	I1026 15:22:23.147589  182377 start.go:83] releasing machines lock for "newest-cni-574718", held for 15.068897915s
	I1026 15:22:23.150728  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:23.151142  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:23.151167  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:23.151719  182377 ssh_runner.go:195] Run: cat /version.json
	I1026 15:22:23.151804  182377 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 15:22:23.155059  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:23.155294  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:23.155561  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:23.155595  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:23.155739  182377 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/newest-cni-574718/id_rsa Username:docker}
	I1026 15:22:23.155910  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:23.155945  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:23.156130  182377 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/newest-cni-574718/id_rsa Username:docker}
	I1026 15:22:23.231442  182377 ssh_runner.go:195] Run: systemctl --version
	I1026 15:22:23.263168  182377 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 15:22:23.405941  182377 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 15:22:23.412607  182377 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 15:22:23.412693  182377 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 15:22:23.431222  182377 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 15:22:23.431247  182377 start.go:495] detecting cgroup driver to use...
	I1026 15:22:23.431329  182377 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 15:22:23.449871  182377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 15:22:23.466135  182377 docker.go:218] disabling cri-docker service (if available) ...
	I1026 15:22:23.466207  182377 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 15:22:23.483845  182377 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 15:22:23.499194  182377 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 15:22:23.646146  182377 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 15:22:23.864499  182377 docker.go:234] disabling docker service ...
	I1026 15:22:23.864576  182377 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 15:22:23.882304  182377 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 15:22:23.897571  182377 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 15:22:24.064966  182377 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 15:22:24.201804  182377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 15:22:24.216914  182377 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 15:22:24.239366  182377 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 15:22:24.239426  182377 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:22:24.251236  182377 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 15:22:24.251318  182377 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:22:24.263630  182377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:22:24.275134  182377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:22:24.287125  182377 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 15:22:24.302136  182377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:22:24.315011  182377 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:22:24.335688  182377 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:22:24.347573  182377 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 15:22:24.358181  182377 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1026 15:22:24.358260  182377 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1026 15:22:24.379177  182377 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 15:22:24.391253  182377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:22:24.532080  182377 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 15:22:24.652383  182377 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 15:22:24.652516  182377 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 15:22:24.658249  182377 start.go:563] Will wait 60s for crictl version
	I1026 15:22:24.658308  182377 ssh_runner.go:195] Run: which crictl
	I1026 15:22:24.662623  182377 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1026 15:22:24.701747  182377 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1026 15:22:24.701833  182377 ssh_runner.go:195] Run: crio --version
	I1026 15:22:24.730381  182377 ssh_runner.go:195] Run: crio --version
	I1026 15:22:24.761145  182377 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1026 15:22:24.764994  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:24.765410  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:24.765433  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:24.765621  182377 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1026 15:22:24.770397  182377 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:22:24.787194  182377 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1026 15:22:24.788437  182377 kubeadm.go:883] updating cluster {Name:newest-cni-574718 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-574718 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.33 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 15:22:24.788570  182377 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:22:24.788622  182377 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:22:24.828217  182377 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1026 15:22:24.828316  182377 ssh_runner.go:195] Run: which lz4
	I1026 15:22:24.833073  182377 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1026 15:22:24.838213  182377 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1026 15:22:24.838246  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1026 15:22:26.232172  182377 crio.go:462] duration metric: took 1.399140151s to copy over tarball
	I1026 15:22:26.232290  182377 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1026 15:22:28.031969  182377 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.79963377s)
	I1026 15:22:28.032009  182377 crio.go:469] duration metric: took 1.799794706s to extract the tarball
	I1026 15:22:28.032019  182377 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1026 15:22:28.083266  182377 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:22:28.129231  182377 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:22:28.129262  182377 cache_images.go:85] Images are preloaded, skipping loading
	I1026 15:22:28.129271  182377 kubeadm.go:934] updating node { 192.168.61.33 8443 v1.34.1 crio true true} ...
	I1026 15:22:28.129386  182377 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-574718 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.33
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-574718 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 15:22:28.129473  182377 ssh_runner.go:195] Run: crio config
	I1026 15:22:28.175414  182377 cni.go:84] Creating CNI manager for ""
	I1026 15:22:28.175448  182377 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 15:22:28.175493  182377 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1026 15:22:28.175532  182377 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.33 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-574718 NodeName:newest-cni-574718 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.33"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.33 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 15:22:28.175679  182377 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.33
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-574718"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.33"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.33"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 15:22:28.175746  182377 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 15:22:28.189114  182377 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 15:22:28.189184  182377 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 15:22:28.201285  182377 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1026 15:22:28.222167  182377 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 15:22:28.241882  182377 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1026 15:22:28.262267  182377 ssh_runner.go:195] Run: grep 192.168.61.33	control-plane.minikube.internal$ /etc/hosts
	I1026 15:22:28.266495  182377 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.33	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:22:28.281183  182377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:22:28.445545  182377 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:22:28.481631  182377 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/newest-cni-574718 for IP: 192.168.61.33
	I1026 15:22:28.481655  182377 certs.go:195] generating shared ca certs ...
	I1026 15:22:28.481672  182377 certs.go:227] acquiring lock for ca certs: {Name:mk93131c71acd79b9ab313e88723331b0af2d4bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:22:28.481853  182377 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-137233/.minikube/ca.key
	I1026 15:22:28.481904  182377 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-137233/.minikube/proxy-client-ca.key
	I1026 15:22:28.481916  182377 certs.go:257] generating profile certs ...
	I1026 15:22:28.482010  182377 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/newest-cni-574718/client.key
	I1026 15:22:28.482074  182377 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/newest-cni-574718/apiserver.key.59f77b64
	I1026 15:22:28.482115  182377 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/newest-cni-574718/proxy-client.key
	I1026 15:22:28.482217  182377 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/141233.pem (1338 bytes)
	W1026 15:22:28.482254  182377 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-137233/.minikube/certs/141233_empty.pem, impossibly tiny 0 bytes
	I1026 15:22:28.482262  182377 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca-key.pem (1679 bytes)
	I1026 15:22:28.482285  182377 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca.pem (1082 bytes)
	I1026 15:22:28.482316  182377 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/cert.pem (1123 bytes)
	I1026 15:22:28.482340  182377 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/key.pem (1675 bytes)
	I1026 15:22:28.482379  182377 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-137233/.minikube/files/etc/ssl/certs/1412332.pem (1708 bytes)
	I1026 15:22:28.483044  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 15:22:28.517526  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1026 15:22:28.558414  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 15:22:28.586297  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 15:22:28.613805  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/newest-cni-574718/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1026 15:22:28.642929  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/newest-cni-574718/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 15:22:28.671810  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/newest-cni-574718/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 15:22:28.700191  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/newest-cni-574718/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 15:22:28.729422  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 15:22:28.756494  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/certs/141233.pem --> /usr/share/ca-certificates/141233.pem (1338 bytes)
	I1026 15:22:28.783988  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/files/etc/ssl/certs/1412332.pem --> /usr/share/ca-certificates/1412332.pem (1708 bytes)
	I1026 15:22:28.812588  182377 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 15:22:28.832551  182377 ssh_runner.go:195] Run: openssl version
	I1026 15:22:28.838355  182377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 15:22:28.850638  182377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:22:28.855574  182377 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 14:16 /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:22:28.855636  182377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:22:28.862555  182377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 15:22:28.874412  182377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/141233.pem && ln -fs /usr/share/ca-certificates/141233.pem /etc/ssl/certs/141233.pem"
	I1026 15:22:28.886395  182377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/141233.pem
	I1026 15:22:28.891025  182377 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 14:24 /usr/share/ca-certificates/141233.pem
	I1026 15:22:28.891082  182377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/141233.pem
	I1026 15:22:28.897923  182377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/141233.pem /etc/ssl/certs/51391683.0"
	I1026 15:22:28.910115  182377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1412332.pem && ln -fs /usr/share/ca-certificates/1412332.pem /etc/ssl/certs/1412332.pem"
	I1026 15:22:28.922622  182377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1412332.pem
	I1026 15:22:28.927296  182377 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 14:24 /usr/share/ca-certificates/1412332.pem
	I1026 15:22:28.927337  182377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1412332.pem
	I1026 15:22:28.934138  182377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1412332.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 15:22:28.945693  182377 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 15:22:28.950557  182377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 15:22:28.957416  182377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 15:22:28.964523  182377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 15:22:28.971586  182377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 15:22:28.978762  182377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 15:22:28.986053  182377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1026 15:22:28.993134  182377 kubeadm.go:400] StartCluster: {Name:newest-cni-574718 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-574718 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.33 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:22:28.993263  182377 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 15:22:28.993323  182377 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 15:22:29.032028  182377 cri.go:89] found id: ""
	I1026 15:22:29.032103  182377 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 15:22:29.043952  182377 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1026 15:22:29.043972  182377 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1026 15:22:29.044040  182377 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1026 15:22:29.056289  182377 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1026 15:22:29.057119  182377 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-574718" does not appear in /home/jenkins/minikube-integration/21664-137233/kubeconfig
	I1026 15:22:29.057648  182377 kubeconfig.go:62] /home/jenkins/minikube-integration/21664-137233/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-574718" cluster setting kubeconfig missing "newest-cni-574718" context setting]
	I1026 15:22:29.058341  182377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-137233/kubeconfig: {Name:mka07626640e842c6c2177ad5f101c4a2dd91d4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:22:29.060135  182377 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1026 15:22:29.070432  182377 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.61.33
	I1026 15:22:29.070477  182377 kubeadm.go:1160] stopping kube-system containers ...
	I1026 15:22:29.070498  182377 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1026 15:22:29.070565  182377 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 15:22:29.108499  182377 cri.go:89] found id: ""
	I1026 15:22:29.108625  182377 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1026 15:22:29.128646  182377 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 15:22:29.140200  182377 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 15:22:29.140217  182377 kubeadm.go:157] found existing configuration files:
	
	I1026 15:22:29.140259  182377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 15:22:29.150547  182377 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 15:22:29.150618  182377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 15:22:29.161551  182377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 15:22:29.171576  182377 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 15:22:29.171637  182377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 15:22:29.182113  182377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 15:22:29.191928  182377 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 15:22:29.191975  182377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 15:22:29.202335  182377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 15:22:29.212043  182377 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 15:22:29.212089  182377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 15:22:29.222315  182377 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 15:22:29.232961  182377 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1026 15:22:29.285078  182377 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1026 15:22:30.940058  182377 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.654938215s)
	I1026 15:22:30.940132  182377 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1026 15:22:31.190262  182377 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1026 15:22:31.246873  182377 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1026 15:22:31.330409  182377 api_server.go:52] waiting for apiserver process to appear ...
	I1026 15:22:31.330532  182377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:22:31.830602  182377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:22:32.330655  182377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:22:32.830666  182377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:22:33.330601  182377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:22:33.376334  182377 api_server.go:72] duration metric: took 2.045939712s to wait for apiserver process to appear ...
	I1026 15:22:33.376368  182377 api_server.go:88] waiting for apiserver healthz status ...
	I1026 15:22:33.376393  182377 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1026 15:22:33.377001  182377 api_server.go:269] stopped: https://192.168.61.33:8443/healthz: Get "https://192.168.61.33:8443/healthz": dial tcp 192.168.61.33:8443: connect: connection refused
	I1026 15:22:33.876665  182377 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1026 15:22:36.154624  182377 api_server.go:279] https://192.168.61.33:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1026 15:22:36.154676  182377 api_server.go:103] status: https://192.168.61.33:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1026 15:22:36.154695  182377 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1026 15:22:36.184996  182377 api_server.go:279] https://192.168.61.33:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1026 15:22:36.185030  182377 api_server.go:103] status: https://192.168.61.33:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1026 15:22:36.377426  182377 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1026 15:22:36.382349  182377 api_server.go:279] https://192.168.61.33:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 15:22:36.382371  182377 api_server.go:103] status: https://192.168.61.33:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 15:22:36.876548  182377 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1026 15:22:36.881970  182377 api_server.go:279] https://192.168.61.33:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 15:22:36.882006  182377 api_server.go:103] status: https://192.168.61.33:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 15:22:37.376698  182377 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1026 15:22:37.384123  182377 api_server.go:279] https://192.168.61.33:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 15:22:37.384156  182377 api_server.go:103] status: https://192.168.61.33:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 15:22:37.876774  182377 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1026 15:22:37.882031  182377 api_server.go:279] https://192.168.61.33:8443/healthz returned 200:
	ok
	I1026 15:22:37.891824  182377 api_server.go:141] control plane version: v1.34.1
	I1026 15:22:37.891850  182377 api_server.go:131] duration metric: took 4.515475379s to wait for apiserver health ...
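Editor's note: the loop above polls the apiserver's /healthz endpoint until the 403/500 responses give way to a 200 "ok". A minimal sketch of that kind of probe, assuming the address from the log and skipping TLS verification, is shown below; it is illustrative only and not minikube's actual api_server.go code.

	// healthz_poll.go - illustrative probe loop; URL and timings are taken from or
	// assumed for this sketch, not copied from minikube's implementation.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// The apiserver serves /healthz over its own serving cert, so an ad-hoc probe
		// either trusts the cluster CA or (as here, for brevity) skips verification.
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		url := "https://192.168.61.33:8443/healthz" // address from the log above
		for i := 0; i < 20; i++ {
			resp, err := client.Get(url)
			if err != nil {
				fmt.Println("not ready:", err)
			} else {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("status %d: %s\n", resp.StatusCode, body)
				if resp.StatusCode == http.StatusOK {
					return // healthz returned 200 "ok"
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
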
	I1026 15:22:37.891861  182377 cni.go:84] Creating CNI manager for ""
	I1026 15:22:37.891868  182377 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 15:22:37.893513  182377 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1026 15:22:37.894739  182377 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1026 15:22:37.909012  182377 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
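Editor's note: the two lines above create /etc/cni/net.d and copy a 496-byte bridge conflist into it. The actual payload is not shown in the log; the sketch below writes an assumed, typical bridge+portmap conflist purely to illustrate the shape of such a file (requires root to run against the real path).

	// cni_conflist.go - illustrative only: the JSON here is a generic bridge CNI
	// config, not the exact file minikube generates.
	package main

	import (
		"log"
		"os"
		"path/filepath"
	)

	func main() {
		conflist := `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {"type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
	     "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}},
	    {"type": "portmap", "capabilities": {"portMappings": true}}
	  ]
	}`
		dir := "/etc/cni/net.d"
		if err := os.MkdirAll(dir, 0o755); err != nil {
			log.Fatal(err)
		}
		// File name matches the one referenced in the log above.
		if err := os.WriteFile(filepath.Join(dir, "1-k8s.conflist"), []byte(conflist), 0o644); err != nil {
			log.Fatal(err)
		}
	}
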
	I1026 15:22:37.935970  182377 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 15:22:37.941779  182377 system_pods.go:59] 8 kube-system pods found
	I1026 15:22:37.941822  182377 system_pods.go:61] "coredns-66bc5c9577-fbtqn" [317aed6d-9584-40f3-9d5c-9e3c670811e8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:22:37.941834  182377 system_pods.go:61] "etcd-newest-cni-574718" [527dfb34-9071-44bf-be3c-75921ad0c849] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:22:37.941848  182377 system_pods.go:61] "kube-apiserver-newest-cni-574718" [4285cb5e-4a30-4d87-8996-1f5fbe723525] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:22:37.941862  182377 system_pods.go:61] "kube-controller-manager-newest-cni-574718" [42199d84-c838-436b-ada5-de73d6269345] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:22:37.941873  182377 system_pods.go:61] "kube-proxy-f9l99" [5e0c5bab-fea7-41d6-bffe-b659055cf68c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1026 15:22:37.941878  182377 system_pods.go:61] "kube-scheduler-newest-cni-574718" [0250002e-226b-45d2-a685-6e315db3d009] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:22:37.941884  182377 system_pods.go:61] "metrics-server-746fcd58dc-7vxxx" [15ffbc76-a090-4786-9808-18f8b4e5ebb8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 15:22:37.941889  182377 system_pods.go:61] "storage-provisioner" [4ec0a217-f2c8-4395-babe-ee26b81a7e69] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:22:37.941897  182377 system_pods.go:74] duration metric: took 5.899576ms to wait for pod list to return data ...
	I1026 15:22:37.941906  182377 node_conditions.go:102] verifying NodePressure condition ...
	I1026 15:22:37.946827  182377 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 15:22:37.946868  182377 node_conditions.go:123] node cpu capacity is 2
	I1026 15:22:37.946885  182377 node_conditions.go:105] duration metric: took 4.973356ms to run NodePressure ...
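Editor's note: the pod and NodePressure checks above amount to listing kube-system pods and reading node capacity. A hedged client-go sketch of the same checks follows; the kubeconfig path is taken from the log and is an assumption for this sketch, not minikube's code.

	// pods_and_nodes.go - sketch of the checks summarized in the log above.
	package main

	import (
		"context"
		"fmt"
		"log"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// kubeconfig path assumed from the log earlier in this run
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21664-137233/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}

		// "waiting for kube-system pods to appear": list them and count.
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%d kube-system pods found\n", len(pods.Items))

		// "verifying NodePressure condition": read node CPU and ephemeral-storage capacity.
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
		}
	}
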
	I1026 15:22:37.946955  182377 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1026 15:22:38.207008  182377 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 15:22:38.236075  182377 ops.go:34] apiserver oom_adj: -16
	I1026 15:22:38.236107  182377 kubeadm.go:601] duration metric: took 9.192128682s to restartPrimaryControlPlane
	I1026 15:22:38.236126  182377 kubeadm.go:402] duration metric: took 9.243002383s to StartCluster
	I1026 15:22:38.236154  182377 settings.go:142] acquiring lock: {Name:mk260d179873b5d5f15b4780b692965367036bbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:22:38.236270  182377 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-137233/kubeconfig
	I1026 15:22:38.238433  182377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-137233/kubeconfig: {Name:mka07626640e842c6c2177ad5f101c4a2dd91d4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:22:38.238827  182377 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.33 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:22:38.238959  182377 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 15:22:38.239088  182377 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-574718"
	I1026 15:22:38.239110  182377 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-574718"
	W1026 15:22:38.239120  182377 addons.go:247] addon storage-provisioner should already be in state true
	I1026 15:22:38.239127  182377 addons.go:69] Setting default-storageclass=true in profile "newest-cni-574718"
	I1026 15:22:38.239155  182377 host.go:66] Checking if "newest-cni-574718" exists ...
	I1026 15:22:38.239168  182377 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-574718"
	I1026 15:22:38.239190  182377 addons.go:69] Setting dashboard=true in profile "newest-cni-574718"
	I1026 15:22:38.239234  182377 addons.go:238] Setting addon dashboard=true in "newest-cni-574718"
	W1026 15:22:38.239252  182377 addons.go:247] addon dashboard should already be in state true
	I1026 15:22:38.239176  182377 config.go:182] Loaded profile config "newest-cni-574718": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:22:38.239296  182377 host.go:66] Checking if "newest-cni-574718" exists ...
	I1026 15:22:38.239172  182377 addons.go:69] Setting metrics-server=true in profile "newest-cni-574718"
	I1026 15:22:38.239373  182377 addons.go:238] Setting addon metrics-server=true in "newest-cni-574718"
	W1026 15:22:38.239384  182377 addons.go:247] addon metrics-server should already be in state true
	I1026 15:22:38.239411  182377 host.go:66] Checking if "newest-cni-574718" exists ...
	I1026 15:22:38.240384  182377 out.go:179] * Verifying Kubernetes components...
	I1026 15:22:38.241817  182377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:22:38.243158  182377 addons.go:238] Setting addon default-storageclass=true in "newest-cni-574718"
	W1026 15:22:38.243174  182377 addons.go:247] addon default-storageclass should already be in state true
	I1026 15:22:38.243191  182377 host.go:66] Checking if "newest-cni-574718" exists ...
	I1026 15:22:38.243431  182377 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:22:38.243449  182377 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1026 15:22:38.243435  182377 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1026 15:22:38.244547  182377 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 15:22:38.244562  182377 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 15:22:38.244795  182377 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1026 15:22:38.244828  182377 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1026 15:22:38.244850  182377 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:22:38.244868  182377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 15:22:38.245802  182377 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1026 15:22:38.246890  182377 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1026 15:22:38.246914  182377 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1026 15:22:38.248534  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:38.248638  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:38.248957  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:38.249338  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:38.249373  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:38.249432  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:38.249474  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:38.249621  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:38.249648  182377 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/newest-cni-574718/id_rsa Username:docker}
	I1026 15:22:38.249665  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:38.249857  182377 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/newest-cni-574718/id_rsa Username:docker}
	I1026 15:22:38.249989  182377 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/newest-cni-574718/id_rsa Username:docker}
	I1026 15:22:38.250917  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:38.251364  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:38.251395  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:38.251570  182377 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/newest-cni-574718/id_rsa Username:docker}
	I1026 15:22:38.548715  182377 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:22:38.574744  182377 api_server.go:52] waiting for apiserver process to appear ...
	I1026 15:22:38.574851  182377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:22:38.594161  182377 api_server.go:72] duration metric: took 355.284664ms to wait for apiserver process to appear ...
	I1026 15:22:38.594202  182377 api_server.go:88] waiting for apiserver healthz status ...
	I1026 15:22:38.594226  182377 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1026 15:22:38.599953  182377 api_server.go:279] https://192.168.61.33:8443/healthz returned 200:
	ok
	I1026 15:22:38.601088  182377 api_server.go:141] control plane version: v1.34.1
	I1026 15:22:38.601116  182377 api_server.go:131] duration metric: took 6.905101ms to wait for apiserver health ...
	I1026 15:22:38.601130  182377 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 15:22:38.604838  182377 system_pods.go:59] 8 kube-system pods found
	I1026 15:22:38.604863  182377 system_pods.go:61] "coredns-66bc5c9577-fbtqn" [317aed6d-9584-40f3-9d5c-9e3c670811e8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:22:38.604872  182377 system_pods.go:61] "etcd-newest-cni-574718" [527dfb34-9071-44bf-be3c-75921ad0c849] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:22:38.604886  182377 system_pods.go:61] "kube-apiserver-newest-cni-574718" [4285cb5e-4a30-4d87-8996-1f5fbe723525] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:22:38.604917  182377 system_pods.go:61] "kube-controller-manager-newest-cni-574718" [42199d84-c838-436b-ada5-de73d6269345] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:22:38.604924  182377 system_pods.go:61] "kube-proxy-f9l99" [5e0c5bab-fea7-41d6-bffe-b659055cf68c] Running
	I1026 15:22:38.604930  182377 system_pods.go:61] "kube-scheduler-newest-cni-574718" [0250002e-226b-45d2-a685-6e315db3d009] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:22:38.604934  182377 system_pods.go:61] "metrics-server-746fcd58dc-7vxxx" [15ffbc76-a090-4786-9808-18f8b4e5ebb8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 15:22:38.604940  182377 system_pods.go:61] "storage-provisioner" [4ec0a217-f2c8-4395-babe-ee26b81a7e69] Running
	I1026 15:22:38.604945  182377 system_pods.go:74] duration metric: took 3.809261ms to wait for pod list to return data ...
	I1026 15:22:38.604952  182377 default_sa.go:34] waiting for default service account to be created ...
	I1026 15:22:38.607878  182377 default_sa.go:45] found service account: "default"
	I1026 15:22:38.607900  182377 default_sa.go:55] duration metric: took 2.941228ms for default service account to be created ...
	I1026 15:22:38.607913  182377 kubeadm.go:586] duration metric: took 369.045368ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1026 15:22:38.607930  182377 node_conditions.go:102] verifying NodePressure condition ...
	I1026 15:22:38.610509  182377 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 15:22:38.610524  182377 node_conditions.go:123] node cpu capacity is 2
	I1026 15:22:38.610536  182377 node_conditions.go:105] duration metric: took 2.601775ms to run NodePressure ...
	I1026 15:22:38.610549  182377 start.go:241] waiting for startup goroutines ...
	I1026 15:22:38.736034  182377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 15:22:38.789628  182377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:22:38.810637  182377 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1026 15:22:38.810662  182377 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1026 15:22:38.831863  182377 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1026 15:22:38.831893  182377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1026 15:22:38.877236  182377 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1026 15:22:38.877280  182377 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1026 15:22:38.881939  182377 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1026 15:22:38.881971  182377 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1026 15:22:38.934545  182377 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 15:22:38.934581  182377 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1026 15:22:38.950819  182377 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1026 15:22:38.950852  182377 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1026 15:22:38.995779  182377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 15:22:39.021057  182377 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1026 15:22:39.021079  182377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1026 15:22:39.079563  182377 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1026 15:22:39.079594  182377 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1026 15:22:39.132351  182377 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1026 15:22:39.132382  182377 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1026 15:22:39.193426  182377 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1026 15:22:39.193470  182377 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1026 15:22:39.235471  182377 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1026 15:22:39.235496  182377 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1026 15:22:39.271746  182377 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 15:22:39.271773  182377 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1026 15:22:39.307718  182377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 15:22:40.193013  182377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.403339708s)
	I1026 15:22:40.408827  182377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.413001507s)
	I1026 15:22:40.408876  182377 addons.go:479] Verifying addon metrics-server=true in "newest-cni-574718"
	I1026 15:22:40.667395  182377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.359629965s)
	I1026 15:22:40.668723  182377 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-574718 addons enable metrics-server
	
	I1026 15:22:40.669858  182377 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I1026 15:22:40.671055  182377 addons.go:514] duration metric: took 2.432108694s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I1026 15:22:40.671096  182377 start.go:246] waiting for cluster config update ...
	I1026 15:22:40.671111  182377 start.go:255] writing updated cluster config ...
	I1026 15:22:40.671384  182377 ssh_runner.go:195] Run: rm -f paused
	I1026 15:22:40.721560  182377 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1026 15:22:40.722854  182377 out.go:179] * Done! kubectl is now configured to use "newest-cni-574718" cluster and "default" namespace by default
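Editor's note: the line above compares the kubectl client version with the cluster version ("minor skew: 0"). One way to read the cluster side of that comparison, sketched with client-go's discovery client (kubeconfig path assumed, as before):

	// version_skew.go - illustrative read of the server version used for the skew check.
	package main

	import (
		"fmt"
		"log"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21664-137233/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		sv, err := cs.Discovery().ServerVersion()
		if err != nil {
			log.Fatal(err)
		}
		// sv.Major/sv.Minor are strings, e.g. "1" and "34"; comparing Minor against the
		// kubectl client's minor version yields the "minor skew" figure in the log.
		fmt.Printf("cluster: %s (major %s, minor %s)\n", sv.GitVersion, sv.Major, sv.Minor)
	}
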
	
	
	==> CRI-O <==
	Oct 26 15:30:37 embed-certs-163393 crio[883]: time="2025-10-26 15:30:37.679277790Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761492637679253347,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176956,},InodesUsed:&UInt64Value{Value:65,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=95d4dae9-bafc-44a7-9ff0-2989263736a1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 15:30:37 embed-certs-163393 crio[883]: time="2025-10-26 15:30:37.679777247Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2cf90814-32cc-4c8f-933b-4e1820a3e050 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:30:37 embed-certs-163393 crio[883]: time="2025-10-26 15:30:37.679850808Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2cf90814-32cc-4c8f-933b-4e1820a3e050 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:30:37 embed-certs-163393 crio[883]: time="2025-10-26 15:30:37.680092352Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8416bbde0b8bce8d06ac0f909b52f1ee9a921c759b5e0c6367dfc46fd67c5fd2,PodSandboxId:e7089f34827879322ba958ff1e2536aa5c9d06297bab033156e66e99663bb3f7,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:6,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1761492465340193476,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-6ffb444bf9-rkfts,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: b0901c2e-4930-4c26-8f6a-c31d3d1f7aae,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.
container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ad56b9c2cf9dd8ea77e1aad3e8684261500554f9d30b5d5fe6e7eeb6776b3c0,PodSandboxId:09479676439ef9dd60aeec89dc053459d521e989e53f1235e33b584c59e0e735,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761492113576490027,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da1c32fd-9d15-473c-82ae-a38fb9c54941,},Annotations:map[
string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b09b5bb7efde2f5ced955864aed9909154bb89d6fcf500dae7ba11a6910cebc3,PodSandboxId:c437ab570b9f00f262f7d23afc1c735a5eae3876f7eb08a4a28550c23610a9de,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761492090253434173,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 10785d26-2fbc-4a19-ad15-fcc4d97a0f26,},Annotations:map[string]string{io.
kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de85bea7b642a6aa6c22b2beb3b7267bf31a7ed44b65d2d0423348f52cd50ec7,PodSandboxId:6c7258bb82d045ac1b4e8b45077490989f048ef8a603f07e2501fc20c3ec8b31,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761492086652179559,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-hhhkv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f28546bb-2a20-49cb-a8a3-1aec076501ce,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf
792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:897c8e09909afc69d8e2da66af0507d5028b8bdf02f16a7b0a79d15818e54fef,PodSandboxId:d9aa561c19c3895c184e746f273f9c1ee35edd4b8757aeb6782784a76a119752,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1
a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761492082818831081,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b46kz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1da91a5c-34a5-4481-9924-5e7b32f33938,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59611ca5e91cd083ff2568c97bef97d8f4740ecdf4e53381df7545cfa9e482fb,PodSandboxId:09479676439ef9dd60aeec89dc053459d521e989e53f1235e33b584c59e0e735,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTA
INER_EXITED,CreatedAt:1761492082796992011,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da1c32fd-9d15-473c-82ae-a38fb9c54941,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bea228f4e31fb15c8139ec0487d813c281472f0dfcd575e4f44c00f985baead2,PodSandboxId:be3cb9da5a41ede066eff93e0a759c393e4746d90cd8466088b2f98f242644c6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:17614
92078024828829,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-163393,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dea4f1ddb6ee22b7bdc45e2b5881aa7,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a03b2dcad1775d9dea5e8114a4a9b9ac006228bc912988ea7b070811193dcdd,PodSandboxId:beb50059bdb483887fbb6f7d4e3c4af6c5a47cb7513f75433298365738f2e4f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7e
aae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761492077986609631,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-163393,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1b5894beda8f45b6889609ac990f43f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76d2ff2c19e4db472b9824939eb750d2f0af9f398a3f0d88af735c5cf7208051,PodSandboxId:78e82152150eb7059f06f215e26df1064ff3bf0a9856c055135b20c7eecf0c29,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f
3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761492077957862685,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-163393,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 435d0719dd29427691745ddf86f8f67d,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:972f01a2f188c908249e798cb10559ac92e4c37359f37477fb3fc289799cd3d6,PodSandboxId:89b8bd3e0cbb7564d816e9a0f68c57f16741806f04587e39726aff849a
633a87,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761492077938185268,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-163393,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93d63a29c76749b7d1af0fc04350a087,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go
:74" id=2cf90814-32cc-4c8f-933b-4e1820a3e050 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:30:37 embed-certs-163393 crio[883]: time="2025-10-26 15:30:37.721880566Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=575e84ff-0dfa-4652-87f2-55ef486d4ab4 name=/runtime.v1.RuntimeService/Version
	Oct 26 15:30:37 embed-certs-163393 crio[883]: time="2025-10-26 15:30:37.721968448Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=575e84ff-0dfa-4652-87f2-55ef486d4ab4 name=/runtime.v1.RuntimeService/Version
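Editor's note: the request/response pair above is CRI-O answering the CRI RuntimeService/Version RPC. A small sketch of issuing the same call over the CRI gRPC API follows; the CRI-O socket path is the usual default and an assumption here.

	// cri_version.go - illustrative CRI Version call; not part of the test harness.
	package main

	import (
		"context"
		"fmt"
		"log"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Dial CRI-O's unix socket without TLS (local socket, no transport security).
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		resp, err := client.Version(context.Background(), &runtimeapi.VersionRequest{})
		if err != nil {
			log.Fatal(err)
		}
		// Mirrors the fields in the logged response: RuntimeName, RuntimeVersion, RuntimeApiVersion.
		fmt.Printf("%s %s (CRI %s)\n", resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
	}
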
	Oct 26 15:30:37 embed-certs-163393 crio[883]: time="2025-10-26 15:30:37.724455927Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=01130578-b5e3-4510-b54c-a2aac7a3a56a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 15:30:37 embed-certs-163393 crio[883]: time="2025-10-26 15:30:37.725071887Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761492637725048347,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176956,},InodesUsed:&UInt64Value{Value:65,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=01130578-b5e3-4510-b54c-a2aac7a3a56a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 15:30:37 embed-certs-163393 crio[883]: time="2025-10-26 15:30:37.725800998Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4915f21e-a50f-4379-b049-10a1b47dfde0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:30:37 embed-certs-163393 crio[883]: time="2025-10-26 15:30:37.725914355Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4915f21e-a50f-4379-b049-10a1b47dfde0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:30:37 embed-certs-163393 crio[883]: time="2025-10-26 15:30:37.726499786Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8416bbde0b8bce8d06ac0f909b52f1ee9a921c759b5e0c6367dfc46fd67c5fd2,PodSandboxId:e7089f34827879322ba958ff1e2536aa5c9d06297bab033156e66e99663bb3f7,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:6,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1761492465340193476,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-6ffb444bf9-rkfts,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: b0901c2e-4930-4c26-8f6a-c31d3d1f7aae,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.
container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ad56b9c2cf9dd8ea77e1aad3e8684261500554f9d30b5d5fe6e7eeb6776b3c0,PodSandboxId:09479676439ef9dd60aeec89dc053459d521e989e53f1235e33b584c59e0e735,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761492113576490027,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da1c32fd-9d15-473c-82ae-a38fb9c54941,},Annotations:map[
string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b09b5bb7efde2f5ced955864aed9909154bb89d6fcf500dae7ba11a6910cebc3,PodSandboxId:c437ab570b9f00f262f7d23afc1c735a5eae3876f7eb08a4a28550c23610a9de,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761492090253434173,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 10785d26-2fbc-4a19-ad15-fcc4d97a0f26,},Annotations:map[string]string{io.
kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de85bea7b642a6aa6c22b2beb3b7267bf31a7ed44b65d2d0423348f52cd50ec7,PodSandboxId:6c7258bb82d045ac1b4e8b45077490989f048ef8a603f07e2501fc20c3ec8b31,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761492086652179559,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-hhhkv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f28546bb-2a20-49cb-a8a3-1aec076501ce,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf
792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:897c8e09909afc69d8e2da66af0507d5028b8bdf02f16a7b0a79d15818e54fef,PodSandboxId:d9aa561c19c3895c184e746f273f9c1ee35edd4b8757aeb6782784a76a119752,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1
a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761492082818831081,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b46kz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1da91a5c-34a5-4481-9924-5e7b32f33938,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59611ca5e91cd083ff2568c97bef97d8f4740ecdf4e53381df7545cfa9e482fb,PodSandboxId:09479676439ef9dd60aeec89dc053459d521e989e53f1235e33b584c59e0e735,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTA
INER_EXITED,CreatedAt:1761492082796992011,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da1c32fd-9d15-473c-82ae-a38fb9c54941,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bea228f4e31fb15c8139ec0487d813c281472f0dfcd575e4f44c00f985baead2,PodSandboxId:be3cb9da5a41ede066eff93e0a759c393e4746d90cd8466088b2f98f242644c6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:17614
92078024828829,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-163393,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dea4f1ddb6ee22b7bdc45e2b5881aa7,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a03b2dcad1775d9dea5e8114a4a9b9ac006228bc912988ea7b070811193dcdd,PodSandboxId:beb50059bdb483887fbb6f7d4e3c4af6c5a47cb7513f75433298365738f2e4f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7e
aae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761492077986609631,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-163393,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1b5894beda8f45b6889609ac990f43f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76d2ff2c19e4db472b9824939eb750d2f0af9f398a3f0d88af735c5cf7208051,PodSandboxId:78e82152150eb7059f06f215e26df1064ff3bf0a9856c055135b20c7eecf0c29,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f
3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761492077957862685,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-163393,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 435d0719dd29427691745ddf86f8f67d,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:972f01a2f188c908249e798cb10559ac92e4c37359f37477fb3fc289799cd3d6,PodSandboxId:89b8bd3e0cbb7564d816e9a0f68c57f16741806f04587e39726aff849a
633a87,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761492077938185268,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-163393,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93d63a29c76749b7d1af0fc04350a087,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go
:74" id=4915f21e-a50f-4379-b049-10a1b47dfde0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:30:37 embed-certs-163393 crio[883]: time="2025-10-26 15:30:37.762415890Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3e80bf8a-894b-463f-8235-485908304f3f name=/runtime.v1.RuntimeService/Version
	Oct 26 15:30:37 embed-certs-163393 crio[883]: time="2025-10-26 15:30:37.762517357Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3e80bf8a-894b-463f-8235-485908304f3f name=/runtime.v1.RuntimeService/Version
	Oct 26 15:30:37 embed-certs-163393 crio[883]: time="2025-10-26 15:30:37.764013467Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c76ef0ff-8a13-4353-855d-2c4b32b5a70a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 15:30:37 embed-certs-163393 crio[883]: time="2025-10-26 15:30:37.764465509Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761492637764442411,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176956,},InodesUsed:&UInt64Value{Value:65,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c76ef0ff-8a13-4353-855d-2c4b32b5a70a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 15:30:37 embed-certs-163393 crio[883]: time="2025-10-26 15:30:37.765242293Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d0bcba8a-51aa-45cb-82ab-1d35233252d9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:30:37 embed-certs-163393 crio[883]: time="2025-10-26 15:30:37.765341829Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d0bcba8a-51aa-45cb-82ab-1d35233252d9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:30:37 embed-certs-163393 crio[883]: time="2025-10-26 15:30:37.765754814Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8416bbde0b8bce8d06ac0f909b52f1ee9a921c759b5e0c6367dfc46fd67c5fd2,PodSandboxId:e7089f34827879322ba958ff1e2536aa5c9d06297bab033156e66e99663bb3f7,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:6,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1761492465340193476,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-6ffb444bf9-rkfts,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: b0901c2e-4930-4c26-8f6a-c31d3d1f7aae,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.
container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ad56b9c2cf9dd8ea77e1aad3e8684261500554f9d30b5d5fe6e7eeb6776b3c0,PodSandboxId:09479676439ef9dd60aeec89dc053459d521e989e53f1235e33b584c59e0e735,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761492113576490027,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da1c32fd-9d15-473c-82ae-a38fb9c54941,},Annotations:map[
string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b09b5bb7efde2f5ced955864aed9909154bb89d6fcf500dae7ba11a6910cebc3,PodSandboxId:c437ab570b9f00f262f7d23afc1c735a5eae3876f7eb08a4a28550c23610a9de,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761492090253434173,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 10785d26-2fbc-4a19-ad15-fcc4d97a0f26,},Annotations:map[string]string{io.
kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de85bea7b642a6aa6c22b2beb3b7267bf31a7ed44b65d2d0423348f52cd50ec7,PodSandboxId:6c7258bb82d045ac1b4e8b45077490989f048ef8a603f07e2501fc20c3ec8b31,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761492086652179559,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-hhhkv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f28546bb-2a20-49cb-a8a3-1aec076501ce,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf
792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:897c8e09909afc69d8e2da66af0507d5028b8bdf02f16a7b0a79d15818e54fef,PodSandboxId:d9aa561c19c3895c184e746f273f9c1ee35edd4b8757aeb6782784a76a119752,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1
a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761492082818831081,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b46kz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1da91a5c-34a5-4481-9924-5e7b32f33938,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59611ca5e91cd083ff2568c97bef97d8f4740ecdf4e53381df7545cfa9e482fb,PodSandboxId:09479676439ef9dd60aeec89dc053459d521e989e53f1235e33b584c59e0e735,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTA
INER_EXITED,CreatedAt:1761492082796992011,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da1c32fd-9d15-473c-82ae-a38fb9c54941,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bea228f4e31fb15c8139ec0487d813c281472f0dfcd575e4f44c00f985baead2,PodSandboxId:be3cb9da5a41ede066eff93e0a759c393e4746d90cd8466088b2f98f242644c6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:17614
92078024828829,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-163393,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dea4f1ddb6ee22b7bdc45e2b5881aa7,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a03b2dcad1775d9dea5e8114a4a9b9ac006228bc912988ea7b070811193dcdd,PodSandboxId:beb50059bdb483887fbb6f7d4e3c4af6c5a47cb7513f75433298365738f2e4f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7e
aae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761492077986609631,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-163393,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1b5894beda8f45b6889609ac990f43f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76d2ff2c19e4db472b9824939eb750d2f0af9f398a3f0d88af735c5cf7208051,PodSandboxId:78e82152150eb7059f06f215e26df1064ff3bf0a9856c055135b20c7eecf0c29,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f
3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761492077957862685,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-163393,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 435d0719dd29427691745ddf86f8f67d,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:972f01a2f188c908249e798cb10559ac92e4c37359f37477fb3fc289799cd3d6,PodSandboxId:89b8bd3e0cbb7564d816e9a0f68c57f16741806f04587e39726aff849a
633a87,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761492077938185268,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-163393,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93d63a29c76749b7d1af0fc04350a087,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go
:74" id=d0bcba8a-51aa-45cb-82ab-1d35233252d9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:30:37 embed-certs-163393 crio[883]: time="2025-10-26 15:30:37.811759759Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2dbcf587-b65e-4a6e-a014-a06ccec33e5d name=/runtime.v1.RuntimeService/Version
	Oct 26 15:30:37 embed-certs-163393 crio[883]: time="2025-10-26 15:30:37.811835263Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2dbcf587-b65e-4a6e-a014-a06ccec33e5d name=/runtime.v1.RuntimeService/Version
	Oct 26 15:30:37 embed-certs-163393 crio[883]: time="2025-10-26 15:30:37.813007216Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a00d2f4b-fab3-4af8-8da4-6407cb6bd851 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 15:30:37 embed-certs-163393 crio[883]: time="2025-10-26 15:30:37.813459200Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761492637813435757,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176956,},InodesUsed:&UInt64Value{Value:65,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a00d2f4b-fab3-4af8-8da4-6407cb6bd851 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 15:30:37 embed-certs-163393 crio[883]: time="2025-10-26 15:30:37.813955286Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a8e44d75-3f80-4935-991c-95b5d65e0791 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:30:37 embed-certs-163393 crio[883]: time="2025-10-26 15:30:37.814006573Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a8e44d75-3f80-4935-991c-95b5d65e0791 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:30:37 embed-certs-163393 crio[883]: time="2025-10-26 15:30:37.814249732Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8416bbde0b8bce8d06ac0f909b52f1ee9a921c759b5e0c6367dfc46fd67c5fd2,PodSandboxId:e7089f34827879322ba958ff1e2536aa5c9d06297bab033156e66e99663bb3f7,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:6,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1761492465340193476,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-6ffb444bf9-rkfts,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: b0901c2e-4930-4c26-8f6a-c31d3d1f7aae,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.
container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ad56b9c2cf9dd8ea77e1aad3e8684261500554f9d30b5d5fe6e7eeb6776b3c0,PodSandboxId:09479676439ef9dd60aeec89dc053459d521e989e53f1235e33b584c59e0e735,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761492113576490027,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da1c32fd-9d15-473c-82ae-a38fb9c54941,},Annotations:map[
string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b09b5bb7efde2f5ced955864aed9909154bb89d6fcf500dae7ba11a6910cebc3,PodSandboxId:c437ab570b9f00f262f7d23afc1c735a5eae3876f7eb08a4a28550c23610a9de,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761492090253434173,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 10785d26-2fbc-4a19-ad15-fcc4d97a0f26,},Annotations:map[string]string{io.
kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de85bea7b642a6aa6c22b2beb3b7267bf31a7ed44b65d2d0423348f52cd50ec7,PodSandboxId:6c7258bb82d045ac1b4e8b45077490989f048ef8a603f07e2501fc20c3ec8b31,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761492086652179559,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-hhhkv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f28546bb-2a20-49cb-a8a3-1aec076501ce,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf
792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:897c8e09909afc69d8e2da66af0507d5028b8bdf02f16a7b0a79d15818e54fef,PodSandboxId:d9aa561c19c3895c184e746f273f9c1ee35edd4b8757aeb6782784a76a119752,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1
a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761492082818831081,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b46kz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1da91a5c-34a5-4481-9924-5e7b32f33938,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59611ca5e91cd083ff2568c97bef97d8f4740ecdf4e53381df7545cfa9e482fb,PodSandboxId:09479676439ef9dd60aeec89dc053459d521e989e53f1235e33b584c59e0e735,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTA
INER_EXITED,CreatedAt:1761492082796992011,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da1c32fd-9d15-473c-82ae-a38fb9c54941,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bea228f4e31fb15c8139ec0487d813c281472f0dfcd575e4f44c00f985baead2,PodSandboxId:be3cb9da5a41ede066eff93e0a759c393e4746d90cd8466088b2f98f242644c6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:17614
92078024828829,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-163393,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dea4f1ddb6ee22b7bdc45e2b5881aa7,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a03b2dcad1775d9dea5e8114a4a9b9ac006228bc912988ea7b070811193dcdd,PodSandboxId:beb50059bdb483887fbb6f7d4e3c4af6c5a47cb7513f75433298365738f2e4f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7e
aae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761492077986609631,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-163393,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1b5894beda8f45b6889609ac990f43f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76d2ff2c19e4db472b9824939eb750d2f0af9f398a3f0d88af735c5cf7208051,PodSandboxId:78e82152150eb7059f06f215e26df1064ff3bf0a9856c055135b20c7eecf0c29,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f
3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761492077957862685,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-163393,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 435d0719dd29427691745ddf86f8f67d,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:972f01a2f188c908249e798cb10559ac92e4c37359f37477fb3fc289799cd3d6,PodSandboxId:89b8bd3e0cbb7564d816e9a0f68c57f16741806f04587e39726aff849a
633a87,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761492077938185268,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-163393,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93d63a29c76749b7d1af0fc04350a087,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go
:74" id=a8e44d75-3f80-4935-991c-95b5d65e0791 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	8416bbde0b8bc       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                      2 minutes ago       Exited              dashboard-metrics-scraper   6                   e7089f3482787       dashboard-metrics-scraper-6ffb444bf9-rkfts
	0ad56b9c2cf9d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago       Running             storage-provisioner         2                   09479676439ef       storage-provisioner
	b09b5bb7efde2       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   9 minutes ago       Running             busybox                     1                   c437ab570b9f0       busybox
	de85bea7b642a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      9 minutes ago       Running             coredns                     1                   6c7258bb82d04       coredns-66bc5c9577-hhhkv
	897c8e09909af       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      9 minutes ago       Running             kube-proxy                  1                   d9aa561c19c38       kube-proxy-b46kz
	59611ca5e91cd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner         1                   09479676439ef       storage-provisioner
	bea228f4e31fb       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      9 minutes ago       Running             etcd                        1                   be3cb9da5a41e       etcd-embed-certs-163393
	2a03b2dcad177       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      9 minutes ago       Running             kube-scheduler              1                   beb50059bdb48       kube-scheduler-embed-certs-163393
	76d2ff2c19e4d       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      9 minutes ago       Running             kube-controller-manager     1                   78e82152150eb       kube-controller-manager-embed-certs-163393
	972f01a2f188c       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      9 minutes ago       Running             kube-apiserver              1                   89b8bd3e0cbb7       kube-apiserver-embed-certs-163393
	
	
	==> coredns [de85bea7b642a6aa6c22b2beb3b7267bf31a7ed44b65d2d0423348f52cd50ec7] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57956 - 22912 "HINFO IN 5038473847254140814.7620950374588468031. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.029880705s
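	
	The lone NXDOMAIN above is CoreDNS answering its own loop-detection probe (the random-label HINFO query it sends at startup), not a failed client lookup. If in-cluster DNS needs a spot check, a minimal sketch — the busybox:1.36 image and the kubeconfig context matching the minikube profile name are assumptions:
	
	  kubectl --context embed-certs-163393 run dns-check --rm -i --restart=Never --image=busybox:1.36 -- nslookup kubernetes.default.svc.cluster.local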
	
	
	==> describe nodes <==
	Name:               embed-certs-163393
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-163393
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46
	                    minikube.k8s.io/name=embed-certs-163393
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T15_18_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 15:18:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-163393
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 15:30:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 15:27:08 +0000   Sun, 26 Oct 2025 15:18:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 15:27:08 +0000   Sun, 26 Oct 2025 15:18:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 15:27:08 +0000   Sun, 26 Oct 2025 15:18:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 15:27:08 +0000   Sun, 26 Oct 2025 15:21:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.103
	  Hostname:    embed-certs-163393
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042708Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042708Ki
	  pods:               110
	System Info:
	  Machine ID:                 31c0eca226a441c9a6dfd975a508de47
	  System UUID:                31c0eca2-26a4-41c9-a6df-d975a508de47
	  Boot ID:                    85e8c752-cce9-4b70-b7d5-1ff1562ab03c
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-66bc5c9577-hhhkv                      100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     12m
	  kube-system                 etcd-embed-certs-163393                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         12m
	  kube-system                 kube-apiserver-embed-certs-163393             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-embed-certs-163393    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-b46kz                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-embed-certs-163393             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-746fcd58dc-frdcx               100m (5%)     0 (0%)      200Mi (6%)       0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-rkfts    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-nxc8p         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (12%)  170Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   Starting                 9m14s                  kube-proxy       
	  Normal   Starting                 12m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m                    kubelet          Node embed-certs-163393 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                    kubelet          Node embed-certs-163393 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                    kubelet          Node embed-certs-163393 status is now: NodeHasSufficientPID
	  Normal   NodeReady                12m                    kubelet          Node embed-certs-163393 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node embed-certs-163393 event: Registered Node embed-certs-163393 in Controller
	  Normal   Starting                 9m22s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  9m22s (x8 over 9m22s)  kubelet          Node embed-certs-163393 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m22s (x8 over 9m22s)  kubelet          Node embed-certs-163393 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m22s (x7 over 9m22s)  kubelet          Node embed-certs-163393 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  9m22s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 9m17s                  kubelet          Node embed-certs-163393 has been rebooted, boot id: 85e8c752-cce9-4b70-b7d5-1ff1562ab03c
	  Normal   RegisteredNode           9m13s                  node-controller  Node embed-certs-163393 event: Registered Node embed-certs-163393 in Controller
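	
	For reference, the Allocated resources block above is just the column sums from the pod table: CPU requests 100m (coredns) + 100m (etcd) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) + 100m (metrics-server) = 850m, i.e. 850m / 2000m ≈ 42% of the two allocatable CPUs; memory requests 70Mi + 100Mi + 200Mi = 370Mi ≈ 12% of the 3042708Ki allocatable, and the single 170Mi CoreDNS limit ≈ 5%.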
	
	
	==> dmesg <==
	[Oct26 15:20] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[Oct26 15:21] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.001892] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.843151] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000015] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.129076] kauditd_printk_skb: 60 callbacks suppressed
	[  +0.093708] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.581132] kauditd_printk_skb: 168 callbacks suppressed
	[  +3.754830] kauditd_printk_skb: 347 callbacks suppressed
	[  +0.036096] kauditd_printk_skb: 11 callbacks suppressed
	[Oct26 15:22] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.857673] kauditd_printk_skb: 32 callbacks suppressed
	[ +18.721574] kauditd_printk_skb: 13 callbacks suppressed
	[ +23.987667] kauditd_printk_skb: 6 callbacks suppressed
	[Oct26 15:23] kauditd_printk_skb: 6 callbacks suppressed
	[Oct26 15:25] kauditd_printk_skb: 6 callbacks suppressed
	[Oct26 15:27] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [bea228f4e31fb15c8139ec0487d813c281472f0dfcd575e4f44c00f985baead2] <==
	{"level":"warn","ts":"2025-10-26T15:21:20.508088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:21:20.522285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:21:20.535384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:21:20.546018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:21:20.559486Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:21:20.566408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:21:20.575413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:21:20.636472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37194","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-26T15:21:24.998226Z","caller":"traceutil/trace.go:172","msg":"trace[523423115] transaction","detail":"{read_only:false; response_revision:571; number_of_response:1; }","duration":"109.682013ms","start":"2025-10-26T15:21:24.888525Z","end":"2025-10-26T15:21:24.998206Z","steps":["trace[523423115] 'process raft request'  (duration: 108.120764ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T15:21:34.218966Z","caller":"traceutil/trace.go:172","msg":"trace[1663933131] transaction","detail":"{read_only:false; response_revision:666; number_of_response:1; }","duration":"184.334714ms","start":"2025-10-26T15:21:34.034608Z","end":"2025-10-26T15:21:34.218943Z","steps":["trace[1663933131] 'process raft request'  (duration: 183.802487ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T15:21:34.357615Z","caller":"traceutil/trace.go:172","msg":"trace[648416822] linearizableReadLoop","detail":"{readStateIndex:713; appliedIndex:713; }","duration":"116.342744ms","start":"2025-10-26T15:21:34.241227Z","end":"2025-10-26T15:21:34.357570Z","steps":["trace[648416822] 'read index received'  (duration: 116.291615ms)","trace[648416822] 'applied index is now lower than readState.Index'  (duration: 49.88µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-26T15:21:34.656118Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"414.869359ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-embed-certs-163393\" limit:1 ","response":"range_response_count:1 size:7049"}
	{"level":"info","ts":"2025-10-26T15:21:34.657049Z","caller":"traceutil/trace.go:172","msg":"trace[41570780] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-embed-certs-163393; range_end:; response_count:1; response_revision:666; }","duration":"415.813974ms","start":"2025-10-26T15:21:34.241223Z","end":"2025-10-26T15:21:34.657037Z","steps":["trace[41570780] 'agreement among raft nodes before linearized reading'  (duration: 116.466579ms)","trace[41570780] 'range keys from in-memory index tree'  (duration: 298.298467ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-26T15:21:34.657086Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-26T15:21:34.241201Z","time spent":"415.873715ms","remote":"127.0.0.1:36434","response type":"/etcdserverpb.KV/Range","request count":0,"request size":73,"response count":1,"response size":7072,"request content":"key:\"/registry/pods/kube-system/kube-controller-manager-embed-certs-163393\" limit:1 "}
	{"level":"warn","ts":"2025-10-26T15:21:34.656844Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"298.712389ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16244090372967315732 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-dns\" mod_revision:652 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-dns\" value_size:835 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/kube-dns\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-26T15:21:34.657721Z","caller":"traceutil/trace.go:172","msg":"trace[106216093] linearizableReadLoop","detail":"{readStateIndex:714; appliedIndex:713; }","duration":"163.713335ms","start":"2025-10-26T15:21:34.493992Z","end":"2025-10-26T15:21:34.657705Z","steps":["trace[106216093] 'read index received'  (duration: 161.942817ms)","trace[106216093] 'applied index is now lower than readState.Index'  (duration: 1.768258ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-26T15:21:34.657882Z","caller":"traceutil/trace.go:172","msg":"trace[471236467] transaction","detail":"{read_only:false; response_revision:669; number_of_response:1; }","duration":"425.24744ms","start":"2025-10-26T15:21:34.232625Z","end":"2025-10-26T15:21:34.657872Z","steps":["trace[471236467] 'process raft request'  (duration: 424.683397ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-26T15:21:34.657915Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"163.928333ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-hhhkv\" limit:1 ","response":"range_response_count:1 size:5458"}
	{"level":"info","ts":"2025-10-26T15:21:34.657978Z","caller":"traceutil/trace.go:172","msg":"trace[460899596] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-hhhkv; range_end:; response_count:1; response_revision:669; }","duration":"163.993817ms","start":"2025-10-26T15:21:34.493973Z","end":"2025-10-26T15:21:34.657967Z","steps":["trace[460899596] 'agreement among raft nodes before linearized reading'  (duration: 163.83791ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-26T15:21:34.658090Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-26T15:21:34.232608Z","time spent":"425.327628ms","remote":"127.0.0.1:36994","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4134,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/replicasets/kube-system/coredns-66bc5c9577\" mod_revision:577 > success:<request_put:<key:\"/registry/replicasets/kube-system/coredns-66bc5c9577\" value_size:4074 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/coredns-66bc5c9577\" > >"}
	{"level":"info","ts":"2025-10-26T15:21:34.658297Z","caller":"traceutil/trace.go:172","msg":"trace[1999106465] transaction","detail":"{read_only:false; response_revision:667; number_of_response:1; }","duration":"431.065002ms","start":"2025-10-26T15:21:34.227224Z","end":"2025-10-26T15:21:34.658289Z","steps":["trace[1999106465] 'process raft request'  (duration: 130.517356ms)","trace[1999106465] 'compare'  (duration: 298.563862ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-26T15:21:34.658447Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-26T15:21:34.227207Z","time spent":"431.215255ms","remote":"127.0.0.1:36386","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":892,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-dns\" mod_revision:652 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-dns\" value_size:835 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/kube-dns\" > >"}
	{"level":"info","ts":"2025-10-26T15:21:34.658774Z","caller":"traceutil/trace.go:172","msg":"trace[1167441621] transaction","detail":"{read_only:false; response_revision:668; number_of_response:1; }","duration":"430.518241ms","start":"2025-10-26T15:21:34.228250Z","end":"2025-10-26T15:21:34.658768Z","steps":["trace[1167441621] 'process raft request'  (duration: 429.016312ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-26T15:21:34.659038Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-26T15:21:34.228236Z","time spent":"430.779375ms","remote":"127.0.0.1:36600","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1259,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/endpointslices/kube-system/kube-dns-f9l6q\" mod_revision:651 > success:<request_put:<key:\"/registry/endpointslices/kube-system/kube-dns-f9l6q\" value_size:1200 >> failure:<request_range:<key:\"/registry/endpointslices/kube-system/kube-dns-f9l6q\" > >"}
	{"level":"warn","ts":"2025-10-26T15:21:58.097300Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"132.184259ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16244090372967315873 > lease_revoke:<id:616e9a211c0f3952>","response":"size:28"}
	
	
	==> kernel <==
	 15:30:38 up 9 min,  0 users,  load average: 0.10, 0.15, 0.10
	Linux embed-certs-163393 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [972f01a2f188c908249e798cb10559ac92e4c37359f37477fb3fc289799cd3d6] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1026 15:26:22.364305       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1026 15:26:22.364372       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 15:26:22.364424       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1026 15:26:22.365564       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1026 15:27:22.365432       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 15:27:22.365507       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1026 15:27:22.365517       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1026 15:27:22.365611       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 15:27:22.365641       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1026 15:27:22.366816       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1026 15:29:22.366125       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 15:29:22.366229       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1026 15:29:22.366240       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1026 15:29:22.367454       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 15:29:22.367581       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1026 15:29:22.367611       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
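	
	The repeated 503s for v1beta1.metrics.k8s.io mean the aggregation layer has no healthy backend for the metrics API — typically the metrics-server Service has no ready endpoints or its TLS check is failing — which ties back to the metrics-server-746fcd58dc-frdcx pod listed in the node description above. A quick triage sketch; the k8s-app=metrics-server label is the one the addon normally uses and is an assumption here:
	
	  kubectl --context embed-certs-163393 get apiservice v1beta1.metrics.k8s.io
	  kubectl --context embed-certs-163393 -n kube-system get pods -l k8s-app=metrics-server
	  kubectl --context embed-certs-163393 -n kube-system describe pod metrics-server-746fcd58dc-frdcx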
	
	
	==> kube-controller-manager [76d2ff2c19e4db472b9824939eb750d2f0af9f398a3f0d88af735c5cf7208051] <==
	I1026 15:24:25.156804       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1026 15:24:55.057602       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 15:24:55.164039       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1026 15:25:25.062471       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 15:25:25.173056       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1026 15:25:55.066803       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 15:25:55.182808       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1026 15:26:25.071244       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 15:26:25.191282       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1026 15:26:55.075826       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 15:26:55.198075       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1026 15:27:25.080499       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 15:27:25.205449       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1026 15:27:55.086731       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 15:27:55.213571       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1026 15:28:25.091357       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 15:28:25.222402       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1026 15:28:55.095373       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 15:28:55.229958       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1026 15:29:25.100164       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 15:29:25.241327       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1026 15:29:55.105139       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 15:29:55.249421       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1026 15:30:25.109992       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 15:30:25.257902       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [897c8e09909afc69d8e2da66af0507d5028b8bdf02f16a7b0a79d15818e54fef] <==
	I1026 15:21:23.159825       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 15:21:23.261290       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 15:21:23.261724       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.103"]
	E1026 15:21:23.261858       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 15:21:23.444002       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1026 15:21:23.444180       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1026 15:21:23.444370       1 server_linux.go:132] "Using iptables Proxier"
	I1026 15:21:23.457163       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 15:21:23.457608       1 server.go:527] "Version info" version="v1.34.1"
	I1026 15:21:23.457693       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:21:23.467906       1 config.go:200] "Starting service config controller"
	I1026 15:21:23.467933       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 15:21:23.467951       1 config.go:106] "Starting endpoint slice config controller"
	I1026 15:21:23.467955       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 15:21:23.467964       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 15:21:23.467967       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 15:21:23.470600       1 config.go:309] "Starting node config controller"
	I1026 15:21:23.470622       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 15:21:23.570416       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 15:21:23.570480       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1026 15:21:23.584868       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 15:21:23.584935       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [2a03b2dcad1775d9dea5e8114a4a9b9ac006228bc912988ea7b070811193dcdd] <==
	I1026 15:21:19.520346       1 serving.go:386] Generated self-signed cert in-memory
	W1026 15:21:21.296244       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1026 15:21:21.296360       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1026 15:21:21.296544       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1026 15:21:21.296619       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1026 15:21:21.401191       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1026 15:21:21.401233       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:21:21.407480       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:21:21.407517       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:21:21.409992       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1026 15:21:21.410114       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 15:21:21.508726       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 15:29:56 embed-certs-163393 kubelet[1214]: E1026 15:29:56.463697    1214 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761492596463321833  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 26 15:29:56 embed-certs-163393 kubelet[1214]: E1026 15:29:56.463781    1214 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761492596463321833  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 26 15:30:00 embed-certs-163393 kubelet[1214]: I1026 15:30:00.329162    1214 scope.go:117] "RemoveContainer" containerID="8416bbde0b8bce8d06ac0f909b52f1ee9a921c759b5e0c6367dfc46fd67c5fd2"
	Oct 26 15:30:00 embed-certs-163393 kubelet[1214]: E1026 15:30:00.329372    1214 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rkfts_kubernetes-dashboard(b0901c2e-4930-4c26-8f6a-c31d3d1f7aae)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rkfts" podUID="b0901c2e-4930-4c26-8f6a-c31d3d1f7aae"
	Oct 26 15:30:04 embed-certs-163393 kubelet[1214]: E1026 15:30:04.331280    1214 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-frdcx" podUID="13465c12-1bb9-42c2-922e-695a3e2387b6"
	Oct 26 15:30:06 embed-certs-163393 kubelet[1214]: E1026 15:30:06.469645    1214 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761492606466407978  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 26 15:30:06 embed-certs-163393 kubelet[1214]: E1026 15:30:06.469738    1214 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761492606466407978  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 26 15:30:07 embed-certs-163393 kubelet[1214]: E1026 15:30:07.188702    1214 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 26 15:30:07 embed-certs-163393 kubelet[1214]: E1026 15:30:07.188768    1214 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 26 15:30:07 embed-certs-163393 kubelet[1214]: E1026 15:30:07.188844    1214 kuberuntime_manager.go:1449] "Unhandled Error" err="container kubernetes-dashboard start failed in pod kubernetes-dashboard-855c9754f9-nxc8p_kubernetes-dashboard(ee5a7e88-da7c-4c3b-bae0-abbaf5ff76bc): ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 26 15:30:07 embed-certs-163393 kubelet[1214]: E1026 15:30:07.188904    1214 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-nxc8p" podUID="ee5a7e88-da7c-4c3b-bae0-abbaf5ff76bc"
	Oct 26 15:30:15 embed-certs-163393 kubelet[1214]: I1026 15:30:15.328181    1214 scope.go:117] "RemoveContainer" containerID="8416bbde0b8bce8d06ac0f909b52f1ee9a921c759b5e0c6367dfc46fd67c5fd2"
	Oct 26 15:30:15 embed-certs-163393 kubelet[1214]: E1026 15:30:15.328320    1214 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rkfts_kubernetes-dashboard(b0901c2e-4930-4c26-8f6a-c31d3d1f7aae)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rkfts" podUID="b0901c2e-4930-4c26-8f6a-c31d3d1f7aae"
	Oct 26 15:30:16 embed-certs-163393 kubelet[1214]: E1026 15:30:16.472224    1214 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761492616471589653  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 26 15:30:16 embed-certs-163393 kubelet[1214]: E1026 15:30:16.472813    1214 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761492616471589653  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 26 15:30:18 embed-certs-163393 kubelet[1214]: E1026 15:30:18.329841    1214 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-frdcx" podUID="13465c12-1bb9-42c2-922e-695a3e2387b6"
	Oct 26 15:30:20 embed-certs-163393 kubelet[1214]: E1026 15:30:20.330627    1214 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-nxc8p" podUID="ee5a7e88-da7c-4c3b-bae0-abbaf5ff76bc"
	Oct 26 15:30:26 embed-certs-163393 kubelet[1214]: E1026 15:30:26.474616    1214 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761492626474337924  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 26 15:30:26 embed-certs-163393 kubelet[1214]: E1026 15:30:26.474655    1214 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761492626474337924  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 26 15:30:29 embed-certs-163393 kubelet[1214]: I1026 15:30:29.328689    1214 scope.go:117] "RemoveContainer" containerID="8416bbde0b8bce8d06ac0f909b52f1ee9a921c759b5e0c6367dfc46fd67c5fd2"
	Oct 26 15:30:29 embed-certs-163393 kubelet[1214]: E1026 15:30:29.328856    1214 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rkfts_kubernetes-dashboard(b0901c2e-4930-4c26-8f6a-c31d3d1f7aae)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rkfts" podUID="b0901c2e-4930-4c26-8f6a-c31d3d1f7aae"
	Oct 26 15:30:32 embed-certs-163393 kubelet[1214]: E1026 15:30:32.330604    1214 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-frdcx" podUID="13465c12-1bb9-42c2-922e-695a3e2387b6"
	Oct 26 15:30:33 embed-certs-163393 kubelet[1214]: E1026 15:30:33.332445    1214 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-nxc8p" podUID="ee5a7e88-da7c-4c3b-bae0-abbaf5ff76bc"
	Oct 26 15:30:36 embed-certs-163393 kubelet[1214]: E1026 15:30:36.476861    1214 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761492636476588663  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 26 15:30:36 embed-certs-163393 kubelet[1214]: E1026 15:30:36.476883    1214 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761492636476588663  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	
	
	==> storage-provisioner [0ad56b9c2cf9dd8ea77e1aad3e8684261500554f9d30b5d5fe6e7eeb6776b3c0] <==
	W1026 15:30:13.471495       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:30:15.475007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:30:15.479221       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:30:17.482691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:30:17.487462       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:30:19.492087       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:30:19.500059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:30:21.503887       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:30:21.508403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:30:23.511602       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:30:23.515938       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:30:25.519456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:30:25.527843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:30:27.530930       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:30:27.537859       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:30:29.540963       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:30:29.545308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:30:31.548277       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:30:31.554915       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:30:33.558684       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:30:33.566324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:30:35.569928       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:30:35.575513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:30:37.580739       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:30:37.588061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [59611ca5e91cd083ff2568c97bef97d8f4740ecdf4e53381df7545cfa9e482fb] <==
	I1026 15:21:22.986162       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1026 15:21:52.997101       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-163393 -n embed-certs-163393
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-163393 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-frdcx kubernetes-dashboard-855c9754f9-nxc8p
helpers_test.go:282: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context embed-certs-163393 describe pod metrics-server-746fcd58dc-frdcx kubernetes-dashboard-855c9754f9-nxc8p
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context embed-certs-163393 describe pod metrics-server-746fcd58dc-frdcx kubernetes-dashboard-855c9754f9-nxc8p: exit status 1 (60.174986ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-frdcx" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-nxc8p" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context embed-certs-163393 describe pod metrics-server-746fcd58dc-frdcx kubernetes-dashboard-855c9754f9-nxc8p: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-c8wqg" [cc5b36c9-7c56-4a05-8b30-8bf6d2b12ef4] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1026 15:22:24.824640  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/bridge-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:22:24.831545  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/bridge-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:22:24.843504  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/bridge-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:22:24.864923  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/bridge-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:22:24.906357  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/bridge-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:22:24.987892  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/bridge-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:22:25.149626  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/bridge-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:22:25.471545  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/bridge-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:22:26.113357  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/bridge-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:22:26.546630  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/calico-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:22:27.395745  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/bridge-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:22:29.958136  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/bridge-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:22:35.080508  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/bridge-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:22:40.503239  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/enable-default-cni-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-705037 -n default-k8s-diff-port-705037
start_stop_delete_test.go:272: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-10-26 15:31:21.018518999 +0000 UTC m=+4587.611950446
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context default-k8s-diff-port-705037 describe po kubernetes-dashboard-855c9754f9-c8wqg -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) kubectl --context default-k8s-diff-port-705037 describe po kubernetes-dashboard-855c9754f9-c8wqg -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-c8wqg
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             default-k8s-diff-port-705037/192.168.72.253
Start Time:       Sun, 26 Oct 2025 15:22:08 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sjgzl (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-sjgzl:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  9m13s                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-c8wqg to default-k8s-diff-port-705037
Warning  Failed     6m49s                   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": fetching target platform image selected from manifest list: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    3m59s (x5 over 9m12s)   kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     3m27s (x4 over 8m36s)   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     3m27s (x5 over 8m36s)   kubelet            Error: ErrImagePull
Warning  Failed     2m10s (x16 over 8m36s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    67s (x21 over 8m36s)    kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context default-k8s-diff-port-705037 logs kubernetes-dashboard-855c9754f9-c8wqg -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-705037 logs kubernetes-dashboard-855c9754f9-c8wqg -n kubernetes-dashboard: exit status 1 (73.199719ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-c8wqg" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:272: kubectl --context default-k8s-diff-port-705037 logs kubernetes-dashboard-855c9754f9-c8wqg -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-705037 -n default-k8s-diff-port-705037
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-705037 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-705037 logs -n 25: (1.134482569s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────
─────┐
	│ COMMAND │                                                                                                                    ARGS                                                                                                                     │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────
─────┤
	│ start   │ -p no-preload-758002 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-758002            │ jenkins │ v1.37.0 │ 26 Oct 25 15:20 UTC │ 26 Oct 25 15:21 UTC │
	│ addons  │ enable dashboard -p embed-certs-163393 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                               │ embed-certs-163393           │ jenkins │ v1.37.0 │ 26 Oct 25 15:20 UTC │ 26 Oct 25 15:20 UTC │
	│ start   │ -p embed-certs-163393 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-163393           │ jenkins │ v1.37.0 │ 26 Oct 25 15:20 UTC │ 26 Oct 25 15:21 UTC │
	│ image   │ old-k8s-version-065983 image list --format=json                                                                                                                                                                                             │ old-k8s-version-065983       │ jenkins │ v1.37.0 │ 26 Oct 25 15:20 UTC │ 26 Oct 25 15:20 UTC │
	│ pause   │ -p old-k8s-version-065983 --alsologtostderr -v=1                                                                                                                                                                                            │ old-k8s-version-065983       │ jenkins │ v1.37.0 │ 26 Oct 25 15:20 UTC │ 26 Oct 25 15:20 UTC │
	│ unpause │ -p old-k8s-version-065983 --alsologtostderr -v=1                                                                                                                                                                                            │ old-k8s-version-065983       │ jenkins │ v1.37.0 │ 26 Oct 25 15:20 UTC │ 26 Oct 25 15:20 UTC │
	│ delete  │ -p old-k8s-version-065983                                                                                                                                                                                                                   │ old-k8s-version-065983       │ jenkins │ v1.37.0 │ 26 Oct 25 15:20 UTC │ 26 Oct 25 15:21 UTC │
	│ delete  │ -p old-k8s-version-065983                                                                                                                                                                                                                   │ old-k8s-version-065983       │ jenkins │ v1.37.0 │ 26 Oct 25 15:21 UTC │ 26 Oct 25 15:21 UTC │
	│ start   │ -p newest-cni-574718 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-574718            │ jenkins │ v1.37.0 │ 26 Oct 25 15:21 UTC │ 26 Oct 25 15:21 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-705037 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                     │ default-k8s-diff-port-705037 │ jenkins │ v1.37.0 │ 26 Oct 25 15:21 UTC │ 26 Oct 25 15:21 UTC │
	│ start   │ -p default-k8s-diff-port-705037 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-705037 │ jenkins │ v1.37.0 │ 26 Oct 25 15:21 UTC │ 26 Oct 25 15:22 UTC │
	│ image   │ no-preload-758002 image list --format=json                                                                                                                                                                                                  │ no-preload-758002            │ jenkins │ v1.37.0 │ 26 Oct 25 15:21 UTC │ 26 Oct 25 15:21 UTC │
	│ pause   │ -p no-preload-758002 --alsologtostderr -v=1                                                                                                                                                                                                 │ no-preload-758002            │ jenkins │ v1.37.0 │ 26 Oct 25 15:21 UTC │ 26 Oct 25 15:21 UTC │
	│ unpause │ -p no-preload-758002 --alsologtostderr -v=1                                                                                                                                                                                                 │ no-preload-758002            │ jenkins │ v1.37.0 │ 26 Oct 25 15:21 UTC │ 26 Oct 25 15:21 UTC │
	│ delete  │ -p no-preload-758002                                                                                                                                                                                                                        │ no-preload-758002            │ jenkins │ v1.37.0 │ 26 Oct 25 15:21 UTC │ 26 Oct 25 15:21 UTC │
	│ delete  │ -p no-preload-758002                                                                                                                                                                                                                        │ no-preload-758002            │ jenkins │ v1.37.0 │ 26 Oct 25 15:21 UTC │ 26 Oct 25 15:21 UTC │
	│ addons  │ enable metrics-server -p newest-cni-574718 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                     │ newest-cni-574718            │ jenkins │ v1.37.0 │ 26 Oct 25 15:21 UTC │ 26 Oct 25 15:21 UTC │
	│ stop    │ -p newest-cni-574718 --alsologtostderr -v=3                                                                                                                                                                                                 │ newest-cni-574718            │ jenkins │ v1.37.0 │ 26 Oct 25 15:21 UTC │ 26 Oct 25 15:22 UTC │
	│ addons  │ enable dashboard -p newest-cni-574718 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                │ newest-cni-574718            │ jenkins │ v1.37.0 │ 26 Oct 25 15:22 UTC │ 26 Oct 25 15:22 UTC │
	│ start   │ -p newest-cni-574718 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-574718            │ jenkins │ v1.37.0 │ 26 Oct 25 15:22 UTC │ 26 Oct 25 15:22 UTC │
	│ image   │ newest-cni-574718 image list --format=json                                                                                                                                                                                                  │ newest-cni-574718            │ jenkins │ v1.37.0 │ 26 Oct 25 15:22 UTC │ 26 Oct 25 15:22 UTC │
	│ pause   │ -p newest-cni-574718 --alsologtostderr -v=1                                                                                                                                                                                                 │ newest-cni-574718            │ jenkins │ v1.37.0 │ 26 Oct 25 15:22 UTC │ 26 Oct 25 15:22 UTC │
	│ unpause │ -p newest-cni-574718 --alsologtostderr -v=1                                                                                                                                                                                                 │ newest-cni-574718            │ jenkins │ v1.37.0 │ 26 Oct 25 15:22 UTC │ 26 Oct 25 15:22 UTC │
	│ delete  │ -p newest-cni-574718                                                                                                                                                                                                                        │ newest-cni-574718            │ jenkins │ v1.37.0 │ 26 Oct 25 15:22 UTC │ 26 Oct 25 15:22 UTC │
	│ delete  │ -p newest-cni-574718                                                                                                                                                                                                                        │ newest-cni-574718            │ jenkins │ v1.37.0 │ 26 Oct 25 15:22 UTC │ 26 Oct 25 15:22 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────
─────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 15:22:08
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 15:22:08.024156  182377 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:22:08.024392  182377 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:22:08.024406  182377 out.go:374] Setting ErrFile to fd 2...
	I1026 15:22:08.024410  182377 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:22:08.024606  182377 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-137233/.minikube/bin
	I1026 15:22:08.025048  182377 out.go:368] Setting JSON to false
	I1026 15:22:08.025981  182377 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7462,"bootTime":1761484666,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 15:22:08.026077  182377 start.go:141] virtualization: kvm guest
	I1026 15:22:08.027688  182377 out.go:179] * [newest-cni-574718] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 15:22:08.028960  182377 notify.go:220] Checking for updates...
	I1026 15:22:08.028993  182377 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 15:22:08.030046  182377 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 15:22:08.031185  182377 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-137233/kubeconfig
	I1026 15:22:08.032356  182377 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-137233/.minikube
	I1026 15:22:08.033461  182377 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 15:22:08.034474  182377 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 15:22:08.035832  182377 config.go:182] Loaded profile config "newest-cni-574718": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:22:08.036313  182377 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 15:22:08.072389  182377 out.go:179] * Using the kvm2 driver based on existing profile
	I1026 15:22:08.073663  182377 start.go:305] selected driver: kvm2
	I1026 15:22:08.073682  182377 start.go:925] validating driver "kvm2" against &{Name:newest-cni-574718 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-574718 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.33 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:22:08.073825  182377 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 15:22:08.075175  182377 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1026 15:22:08.075218  182377 cni.go:84] Creating CNI manager for ""
	I1026 15:22:08.075284  182377 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 15:22:08.075345  182377 start.go:349] cluster config:
	{Name:newest-cni-574718 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-574718 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.33 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:22:08.075449  182377 iso.go:125] acquiring lock: {Name:mkfe78fcc13f0f0cc3fec30206c34a5da423b32d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:22:08.077008  182377 out.go:179] * Starting "newest-cni-574718" primary control-plane node in "newest-cni-574718" cluster
	I1026 15:22:08.078030  182377 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:22:08.078073  182377 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-137233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1026 15:22:08.078088  182377 cache.go:58] Caching tarball of preloaded images
	I1026 15:22:08.078221  182377 preload.go:233] Found /home/jenkins/minikube-integration/21664-137233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 15:22:08.078236  182377 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 15:22:08.078334  182377 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/newest-cni-574718/config.json ...
	I1026 15:22:08.078601  182377 start.go:360] acquireMachinesLock for newest-cni-574718: {Name:mka0e861669c2f6d38861d0614c7d3b8dd89392c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1026 15:22:08.078675  182377 start.go:364] duration metric: took 45.376µs to acquireMachinesLock for "newest-cni-574718"
	I1026 15:22:08.078701  182377 start.go:96] Skipping create...Using existing machine configuration
	I1026 15:22:08.078711  182377 fix.go:54] fixHost starting: 
	I1026 15:22:08.080626  182377 fix.go:112] recreateIfNeeded on newest-cni-574718: state=Stopped err=<nil>
	W1026 15:22:08.080669  182377 fix.go:138] unexpected machine state, will restart: <nil>
	I1026 15:22:06.333558  181858 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:22:06.357436  181858 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-705037" to be "Ready" ...
	I1026 15:22:06.360857  181858 node_ready.go:49] node "default-k8s-diff-port-705037" is "Ready"
	I1026 15:22:06.360901  181858 node_ready.go:38] duration metric: took 3.362736ms for node "default-k8s-diff-port-705037" to be "Ready" ...
	I1026 15:22:06.360919  181858 api_server.go:52] waiting for apiserver process to appear ...
	I1026 15:22:06.360981  181858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:22:06.385860  181858 api_server.go:72] duration metric: took 266.62216ms to wait for apiserver process to appear ...
	I1026 15:22:06.385897  181858 api_server.go:88] waiting for apiserver healthz status ...
	I1026 15:22:06.385937  181858 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8444/healthz ...
	I1026 15:22:06.392647  181858 api_server.go:279] https://192.168.72.253:8444/healthz returned 200:
	ok
	I1026 15:22:06.393766  181858 api_server.go:141] control plane version: v1.34.1
	I1026 15:22:06.393803  181858 api_server.go:131] duration metric: took 7.895398ms to wait for apiserver health ...
	I1026 15:22:06.393816  181858 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 15:22:06.397637  181858 system_pods.go:59] 8 kube-system pods found
	I1026 15:22:06.397674  181858 system_pods.go:61] "coredns-66bc5c9577-fs558" [35c18482-b39d-4e3f-aafd-51642938f5b0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:22:06.397686  181858 system_pods.go:61] "etcd-default-k8s-diff-port-705037" [8f9b42db-0213-4e05-b438-59d38eab399b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:22:06.397698  181858 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-705037" [b8aa7de2-f2f9-447e-83a4-ce4eed131bf3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:22:06.397709  181858 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-705037" [48a3f44e-dfb0-46cb-969f-cf88e075e662] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:22:06.397718  181858 system_pods.go:61] "kube-proxy-kr5kl" [7598b50f-deee-406f-86fc-1f57c2de4887] Running
	I1026 15:22:06.397728  181858 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-705037" [130cd574-dab4-4029-9fa0-47959d8b0eac] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:22:06.397746  181858 system_pods.go:61] "metrics-server-746fcd58dc-nsvb5" [28c11adc-3f4d-46bc-abc5-f9b466e2ca10] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 15:22:06.397756  181858 system_pods.go:61] "storage-provisioner" [974398e3-6fd7-44da-9ec6-a726c71c9e43] Running
	I1026 15:22:06.397766  181858 system_pods.go:74] duration metric: took 3.941599ms to wait for pod list to return data ...
	I1026 15:22:06.397779  181858 default_sa.go:34] waiting for default service account to be created ...
	I1026 15:22:06.403865  181858 default_sa.go:45] found service account: "default"
	I1026 15:22:06.403888  181858 default_sa.go:55] duration metric: took 6.102699ms for default service account to be created ...
	I1026 15:22:06.403898  181858 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 15:22:06.408267  181858 system_pods.go:86] 8 kube-system pods found
	I1026 15:22:06.408305  181858 system_pods.go:89] "coredns-66bc5c9577-fs558" [35c18482-b39d-4e3f-aafd-51642938f5b0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:22:06.408318  181858 system_pods.go:89] "etcd-default-k8s-diff-port-705037" [8f9b42db-0213-4e05-b438-59d38eab399b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:22:06.408330  181858 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-705037" [b8aa7de2-f2f9-447e-83a4-ce4eed131bf3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:22:06.408339  181858 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-705037" [48a3f44e-dfb0-46cb-969f-cf88e075e662] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:22:06.408345  181858 system_pods.go:89] "kube-proxy-kr5kl" [7598b50f-deee-406f-86fc-1f57c2de4887] Running
	I1026 15:22:06.408354  181858 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-705037" [130cd574-dab4-4029-9fa0-47959d8b0eac] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:22:06.408361  181858 system_pods.go:89] "metrics-server-746fcd58dc-nsvb5" [28c11adc-3f4d-46bc-abc5-f9b466e2ca10] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 15:22:06.408373  181858 system_pods.go:89] "storage-provisioner" [974398e3-6fd7-44da-9ec6-a726c71c9e43] Running
	I1026 15:22:06.408383  181858 system_pods.go:126] duration metric: took 4.477868ms to wait for k8s-apps to be running ...
	I1026 15:22:06.408393  181858 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 15:22:06.408450  181858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:22:06.432635  181858 system_svc.go:56] duration metric: took 24.227246ms WaitForService to wait for kubelet
	I1026 15:22:06.432676  181858 kubeadm.go:586] duration metric: took 313.448447ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:22:06.432702  181858 node_conditions.go:102] verifying NodePressure condition ...
	I1026 15:22:06.435956  181858 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 15:22:06.435988  181858 node_conditions.go:123] node cpu capacity is 2
	I1026 15:22:06.436002  181858 node_conditions.go:105] duration metric: took 3.294076ms to run NodePressure ...
	I1026 15:22:06.436018  181858 start.go:241] waiting for startup goroutines ...
	I1026 15:22:06.515065  181858 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 15:22:06.572989  181858 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:22:06.584697  181858 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1026 15:22:06.584737  181858 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1026 15:22:06.595077  181858 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1026 15:22:06.595106  181858 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1026 15:22:06.638704  181858 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1026 15:22:06.638736  181858 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1026 15:22:06.659544  181858 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1026 15:22:06.659582  181858 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1026 15:22:06.702281  181858 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 15:22:06.702320  181858 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1026 15:22:06.711972  181858 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1026 15:22:06.712006  181858 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1026 15:22:06.757866  181858 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 15:22:06.788030  181858 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1026 15:22:06.788064  181858 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1026 15:22:06.847661  181858 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1026 15:22:06.847708  181858 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1026 15:22:06.929153  181858 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1026 15:22:06.929177  181858 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1026 15:22:06.986412  181858 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1026 15:22:06.986448  181858 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1026 15:22:07.045193  181858 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1026 15:22:07.045218  181858 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1026 15:22:07.093617  181858 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 15:22:07.093654  181858 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1026 15:22:07.162711  181858 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 15:22:08.298101  181858 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.725070201s)
	I1026 15:22:08.369209  181858 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.61128174s)
	I1026 15:22:08.369257  181858 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-705037"
	I1026 15:22:08.605124  181858 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.442357492s)
	I1026 15:22:08.606598  181858 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-705037 addons enable metrics-server
	
	I1026 15:22:08.607892  181858 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I1026 15:22:08.609005  181858 addons.go:514] duration metric: took 2.489743866s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I1026 15:22:08.609043  181858 start.go:246] waiting for cluster config update ...
	I1026 15:22:08.609058  181858 start.go:255] writing updated cluster config ...
	I1026 15:22:08.609345  181858 ssh_runner.go:195] Run: rm -f paused
	I1026 15:22:08.616260  181858 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:22:08.620760  181858 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-fs558" in "kube-system" namespace to be "Ready" or be gone ...
	W1026 15:22:10.628668  181858 pod_ready.go:104] pod "coredns-66bc5c9577-fs558" is not "Ready", error: <nil>
	I1026 15:22:08.082049  182377 out.go:252] * Restarting existing kvm2 VM for "newest-cni-574718" ...
	I1026 15:22:08.082089  182377 main.go:141] libmachine: starting domain...
	I1026 15:22:08.082102  182377 main.go:141] libmachine: ensuring networks are active...
	I1026 15:22:08.083029  182377 main.go:141] libmachine: Ensuring network default is active
	I1026 15:22:08.083543  182377 main.go:141] libmachine: Ensuring network mk-newest-cni-574718 is active
	I1026 15:22:08.084108  182377 main.go:141] libmachine: getting domain XML...
	I1026 15:22:08.085257  182377 main.go:141] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>newest-cni-574718</name>
	  <uuid>3e8359f9-dc38-4472-b6d3-ffe603a5ee64</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21664-137233/.minikube/machines/newest-cni-574718/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21664-137233/.minikube/machines/newest-cni-574718/newest-cni-574718.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:7b:b5:97'/>
	      <source network='mk-newest-cni-574718'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:a1:2e:d8'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1026 15:22:09.396910  182377 main.go:141] libmachine: waiting for domain to start...
	I1026 15:22:09.398416  182377 main.go:141] libmachine: domain is now running
	I1026 15:22:09.398445  182377 main.go:141] libmachine: waiting for IP...
	I1026 15:22:09.399448  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:09.400230  182377 main.go:141] libmachine: domain newest-cni-574718 has current primary IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:09.400244  182377 main.go:141] libmachine: found domain IP: 192.168.61.33
	I1026 15:22:09.400250  182377 main.go:141] libmachine: reserving static IP address...
	I1026 15:22:09.400772  182377 main.go:141] libmachine: found host DHCP lease matching {name: "newest-cni-574718", mac: "52:54:00:7b:b5:97", ip: "192.168.61.33"} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:21:24 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:09.400809  182377 main.go:141] libmachine: skip adding static IP to network mk-newest-cni-574718 - found existing host DHCP lease matching {name: "newest-cni-574718", mac: "52:54:00:7b:b5:97", ip: "192.168.61.33"}
	I1026 15:22:09.400837  182377 main.go:141] libmachine: reserved static IP address 192.168.61.33 for domain newest-cni-574718
	I1026 15:22:09.400849  182377 main.go:141] libmachine: waiting for SSH...
	I1026 15:22:09.400857  182377 main.go:141] libmachine: Getting to WaitForSSH function...
	I1026 15:22:09.403391  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:09.403822  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:21:24 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:09.403850  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:09.404075  182377 main.go:141] libmachine: Using SSH client type: native
	I1026 15:22:09.404289  182377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 192.168.61.33 22 <nil> <nil>}
	I1026 15:22:09.404299  182377 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1026 15:22:12.493681  182377 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.33:22: connect: no route to host
	W1026 15:22:12.635327  181858 pod_ready.go:104] pod "coredns-66bc5c9577-fs558" is not "Ready", error: <nil>
	I1026 15:22:14.627621  181858 pod_ready.go:94] pod "coredns-66bc5c9577-fs558" is "Ready"
	I1026 15:22:14.627655  181858 pod_ready.go:86] duration metric: took 6.00687198s for pod "coredns-66bc5c9577-fs558" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:22:14.630599  181858 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-705037" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:22:14.634975  181858 pod_ready.go:94] pod "etcd-default-k8s-diff-port-705037" is "Ready"
	I1026 15:22:14.635007  181858 pod_ready.go:86] duration metric: took 4.382539ms for pod "etcd-default-k8s-diff-port-705037" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:22:14.637185  181858 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-705037" in "kube-system" namespace to be "Ready" or be gone ...
	W1026 15:22:16.644581  181858 pod_ready.go:104] pod "kube-apiserver-default-k8s-diff-port-705037" is not "Ready", error: <nil>
	W1026 15:22:19.144809  181858 pod_ready.go:104] pod "kube-apiserver-default-k8s-diff-port-705037" is not "Ready", error: <nil>
	I1026 15:22:20.143611  181858 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-705037" is "Ready"
	I1026 15:22:20.143640  181858 pod_ready.go:86] duration metric: took 5.506432171s for pod "kube-apiserver-default-k8s-diff-port-705037" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:22:20.145536  181858 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-705037" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:22:20.149100  181858 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-705037" is "Ready"
	I1026 15:22:20.149131  181858 pod_ready.go:86] duration metric: took 3.572718ms for pod "kube-controller-manager-default-k8s-diff-port-705037" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:22:20.151047  181858 pod_ready.go:83] waiting for pod "kube-proxy-kr5kl" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:22:20.155496  181858 pod_ready.go:94] pod "kube-proxy-kr5kl" is "Ready"
	I1026 15:22:20.155521  181858 pod_ready.go:86] duration metric: took 4.452008ms for pod "kube-proxy-kr5kl" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:22:20.157137  181858 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-705037" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:22:20.424601  181858 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-705037" is "Ready"
	I1026 15:22:20.424645  181858 pod_ready.go:86] duration metric: took 267.484691ms for pod "kube-scheduler-default-k8s-diff-port-705037" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:22:20.424664  181858 pod_ready.go:40] duration metric: took 11.808360636s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:22:20.472398  181858 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1026 15:22:20.474272  181858 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-705037" cluster and "default" namespace by default
	I1026 15:22:18.573877  182377 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.33:22: connect: no route to host
	I1026 15:22:21.678716  182377 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 15:22:21.682223  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:21.682617  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:21.682640  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:21.682859  182377 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/newest-cni-574718/config.json ...
	I1026 15:22:21.683068  182377 machine.go:93] provisionDockerMachine start ...
	I1026 15:22:21.685439  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:21.685814  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:21.685841  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:21.686028  182377 main.go:141] libmachine: Using SSH client type: native
	I1026 15:22:21.686280  182377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 192.168.61.33 22 <nil> <nil>}
	I1026 15:22:21.686297  182377 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 15:22:21.789433  182377 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1026 15:22:21.789491  182377 buildroot.go:166] provisioning hostname "newest-cni-574718"
	I1026 15:22:21.792404  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:21.792911  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:21.792937  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:21.793176  182377 main.go:141] libmachine: Using SSH client type: native
	I1026 15:22:21.793395  182377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 192.168.61.33 22 <nil> <nil>}
	I1026 15:22:21.793410  182377 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-574718 && echo "newest-cni-574718" | sudo tee /etc/hostname
	I1026 15:22:21.914128  182377 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-574718
	
	I1026 15:22:21.917275  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:21.917738  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:21.917764  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:21.917937  182377 main.go:141] libmachine: Using SSH client type: native
	I1026 15:22:21.918176  182377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 192.168.61.33 22 <nil> <nil>}
	I1026 15:22:21.918200  182377 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-574718' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-574718/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-574718' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 15:22:22.026151  182377 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 15:22:22.026183  182377 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21664-137233/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-137233/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-137233/.minikube}
	I1026 15:22:22.026217  182377 buildroot.go:174] setting up certificates
	I1026 15:22:22.026229  182377 provision.go:84] configureAuth start
	I1026 15:22:22.029052  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:22.029554  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:22.029582  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:22.031873  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:22.032223  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:22.032249  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:22.032371  182377 provision.go:143] copyHostCerts
	I1026 15:22:22.032450  182377 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-137233/.minikube/ca.pem, removing ...
	I1026 15:22:22.032491  182377 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-137233/.minikube/ca.pem
	I1026 15:22:22.032577  182377 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-137233/.minikube/ca.pem (1082 bytes)
	I1026 15:22:22.032704  182377 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-137233/.minikube/cert.pem, removing ...
	I1026 15:22:22.032719  182377 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-137233/.minikube/cert.pem
	I1026 15:22:22.032762  182377 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-137233/.minikube/cert.pem (1123 bytes)
	I1026 15:22:22.032845  182377 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-137233/.minikube/key.pem, removing ...
	I1026 15:22:22.032855  182377 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-137233/.minikube/key.pem
	I1026 15:22:22.032893  182377 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-137233/.minikube/key.pem (1675 bytes)
	I1026 15:22:22.032958  182377 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-137233/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca-key.pem org=jenkins.newest-cni-574718 san=[127.0.0.1 192.168.61.33 localhost minikube newest-cni-574718]
	I1026 15:22:22.469944  182377 provision.go:177] copyRemoteCerts
	I1026 15:22:22.470018  182377 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 15:22:22.472561  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:22.472948  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:22.472970  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:22.473117  182377 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/newest-cni-574718/id_rsa Username:docker}
	I1026 15:22:22.554777  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 15:22:22.582124  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1026 15:22:22.610149  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 15:22:22.638169  182377 provision.go:87] duration metric: took 611.92185ms to configureAuth
	I1026 15:22:22.638199  182377 buildroot.go:189] setting minikube options for container-runtime
	I1026 15:22:22.638398  182377 config.go:182] Loaded profile config "newest-cni-574718": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:22:22.641177  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:22.641627  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:22.641657  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:22.641842  182377 main.go:141] libmachine: Using SSH client type: native
	I1026 15:22:22.642047  182377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 192.168.61.33 22 <nil> <nil>}
	I1026 15:22:22.642063  182377 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 15:22:22.906384  182377 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 15:22:22.906420  182377 machine.go:96] duration metric: took 1.223336761s to provisionDockerMachine
	I1026 15:22:22.906434  182377 start.go:293] postStartSetup for "newest-cni-574718" (driver="kvm2")
	I1026 15:22:22.906449  182377 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 15:22:22.906556  182377 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 15:22:22.909934  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:22.910412  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:22.910439  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:22.910638  182377 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/newest-cni-574718/id_rsa Username:docker}
	I1026 15:22:22.992977  182377 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 15:22:22.997825  182377 info.go:137] Remote host: Buildroot 2025.02
	I1026 15:22:22.997860  182377 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-137233/.minikube/addons for local assets ...
	I1026 15:22:22.997933  182377 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-137233/.minikube/files for local assets ...
	I1026 15:22:22.998039  182377 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-137233/.minikube/files/etc/ssl/certs/1412332.pem -> 1412332.pem in /etc/ssl/certs
	I1026 15:22:22.998136  182377 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 15:22:23.009341  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/files/etc/ssl/certs/1412332.pem --> /etc/ssl/certs/1412332.pem (1708 bytes)
	I1026 15:22:23.040890  182377 start.go:296] duration metric: took 134.438124ms for postStartSetup
	I1026 15:22:23.040950  182377 fix.go:56] duration metric: took 14.962237903s for fixHost
	I1026 15:22:23.044164  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:23.044594  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:23.044630  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:23.044933  182377 main.go:141] libmachine: Using SSH client type: native
	I1026 15:22:23.045233  182377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 192.168.61.33 22 <nil> <nil>}
	I1026 15:22:23.045254  182377 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1026 15:22:23.147520  182377 main.go:141] libmachine: SSH cmd err, output: <nil>: 1761492143.098139468
	
	I1026 15:22:23.147547  182377 fix.go:216] guest clock: 1761492143.098139468
	I1026 15:22:23.147556  182377 fix.go:229] Guest: 2025-10-26 15:22:23.098139468 +0000 UTC Remote: 2025-10-26 15:22:23.04095679 +0000 UTC m=+15.073904102 (delta=57.182678ms)
	I1026 15:22:23.147581  182377 fix.go:200] guest clock delta is within tolerance: 57.182678ms
	I1026 15:22:23.147589  182377 start.go:83] releasing machines lock for "newest-cni-574718", held for 15.068897915s
	I1026 15:22:23.150728  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:23.151142  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:23.151167  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:23.151719  182377 ssh_runner.go:195] Run: cat /version.json
	I1026 15:22:23.151804  182377 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 15:22:23.155059  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:23.155294  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:23.155561  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:23.155595  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:23.155739  182377 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/newest-cni-574718/id_rsa Username:docker}
	I1026 15:22:23.155910  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:23.155945  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:23.156130  182377 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/newest-cni-574718/id_rsa Username:docker}
	I1026 15:22:23.231442  182377 ssh_runner.go:195] Run: systemctl --version
	I1026 15:22:23.263168  182377 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 15:22:23.405941  182377 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 15:22:23.412607  182377 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 15:22:23.412693  182377 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 15:22:23.431222  182377 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 15:22:23.431247  182377 start.go:495] detecting cgroup driver to use...
	I1026 15:22:23.431329  182377 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 15:22:23.449871  182377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 15:22:23.466135  182377 docker.go:218] disabling cri-docker service (if available) ...
	I1026 15:22:23.466207  182377 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 15:22:23.483845  182377 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 15:22:23.499194  182377 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 15:22:23.646146  182377 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 15:22:23.864499  182377 docker.go:234] disabling docker service ...
	I1026 15:22:23.864576  182377 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 15:22:23.882304  182377 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 15:22:23.897571  182377 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 15:22:24.064966  182377 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 15:22:24.201804  182377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 15:22:24.216914  182377 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 15:22:24.239366  182377 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 15:22:24.239426  182377 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:22:24.251236  182377 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 15:22:24.251318  182377 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:22:24.263630  182377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:22:24.275134  182377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:22:24.287125  182377 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 15:22:24.302136  182377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:22:24.315011  182377 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:22:24.335688  182377 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:22:24.347573  182377 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 15:22:24.358181  182377 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1026 15:22:24.358260  182377 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1026 15:22:24.379177  182377 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 15:22:24.391253  182377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:22:24.532080  182377 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 15:22:24.652383  182377 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 15:22:24.652516  182377 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 15:22:24.658249  182377 start.go:563] Will wait 60s for crictl version
	I1026 15:22:24.658308  182377 ssh_runner.go:195] Run: which crictl
	I1026 15:22:24.662623  182377 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1026 15:22:24.701747  182377 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1026 15:22:24.701833  182377 ssh_runner.go:195] Run: crio --version
	I1026 15:22:24.730381  182377 ssh_runner.go:195] Run: crio --version
	I1026 15:22:24.761145  182377 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1026 15:22:24.764994  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:24.765410  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:24.765433  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:24.765621  182377 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1026 15:22:24.770397  182377 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:22:24.787194  182377 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1026 15:22:24.788437  182377 kubeadm.go:883] updating cluster {Name:newest-cni-574718 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-574718 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.33 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 15:22:24.788570  182377 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:22:24.788622  182377 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:22:24.828217  182377 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1026 15:22:24.828316  182377 ssh_runner.go:195] Run: which lz4
	I1026 15:22:24.833073  182377 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1026 15:22:24.838213  182377 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1026 15:22:24.838246  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1026 15:22:26.232172  182377 crio.go:462] duration metric: took 1.399140151s to copy over tarball
	I1026 15:22:26.232290  182377 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1026 15:22:28.031969  182377 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.79963377s)
	I1026 15:22:28.032009  182377 crio.go:469] duration metric: took 1.799794706s to extract the tarball
	I1026 15:22:28.032019  182377 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1026 15:22:28.083266  182377 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:22:28.129231  182377 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:22:28.129262  182377 cache_images.go:85] Images are preloaded, skipping loading
	I1026 15:22:28.129271  182377 kubeadm.go:934] updating node { 192.168.61.33 8443 v1.34.1 crio true true} ...
	I1026 15:22:28.129386  182377 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-574718 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.33
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-574718 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 15:22:28.129473  182377 ssh_runner.go:195] Run: crio config
	I1026 15:22:28.175414  182377 cni.go:84] Creating CNI manager for ""
	I1026 15:22:28.175448  182377 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 15:22:28.175493  182377 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1026 15:22:28.175532  182377 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.33 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-574718 NodeName:newest-cni-574718 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.33"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.33 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 15:22:28.175679  182377 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.33
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-574718"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.33"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.33"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 15:22:28.175746  182377 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 15:22:28.189114  182377 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 15:22:28.189184  182377 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 15:22:28.201285  182377 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1026 15:22:28.222167  182377 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 15:22:28.241882  182377 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1026 15:22:28.262267  182377 ssh_runner.go:195] Run: grep 192.168.61.33	control-plane.minikube.internal$ /etc/hosts
	I1026 15:22:28.266495  182377 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.33	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:22:28.281183  182377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:22:28.445545  182377 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:22:28.481631  182377 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/newest-cni-574718 for IP: 192.168.61.33
	I1026 15:22:28.481655  182377 certs.go:195] generating shared ca certs ...
	I1026 15:22:28.481672  182377 certs.go:227] acquiring lock for ca certs: {Name:mk93131c71acd79b9ab313e88723331b0af2d4bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:22:28.481853  182377 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-137233/.minikube/ca.key
	I1026 15:22:28.481904  182377 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-137233/.minikube/proxy-client-ca.key
	I1026 15:22:28.481916  182377 certs.go:257] generating profile certs ...
	I1026 15:22:28.482010  182377 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/newest-cni-574718/client.key
	I1026 15:22:28.482074  182377 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/newest-cni-574718/apiserver.key.59f77b64
	I1026 15:22:28.482115  182377 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/newest-cni-574718/proxy-client.key
	I1026 15:22:28.482217  182377 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/141233.pem (1338 bytes)
	W1026 15:22:28.482254  182377 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-137233/.minikube/certs/141233_empty.pem, impossibly tiny 0 bytes
	I1026 15:22:28.482262  182377 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca-key.pem (1679 bytes)
	I1026 15:22:28.482285  182377 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca.pem (1082 bytes)
	I1026 15:22:28.482316  182377 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/cert.pem (1123 bytes)
	I1026 15:22:28.482340  182377 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/key.pem (1675 bytes)
	I1026 15:22:28.482379  182377 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-137233/.minikube/files/etc/ssl/certs/1412332.pem (1708 bytes)
	I1026 15:22:28.483044  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 15:22:28.517526  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1026 15:22:28.558414  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 15:22:28.586297  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 15:22:28.613805  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/newest-cni-574718/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1026 15:22:28.642929  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/newest-cni-574718/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 15:22:28.671810  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/newest-cni-574718/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 15:22:28.700191  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/newest-cni-574718/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 15:22:28.729422  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 15:22:28.756494  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/certs/141233.pem --> /usr/share/ca-certificates/141233.pem (1338 bytes)
	I1026 15:22:28.783988  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/files/etc/ssl/certs/1412332.pem --> /usr/share/ca-certificates/1412332.pem (1708 bytes)
	I1026 15:22:28.812588  182377 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 15:22:28.832551  182377 ssh_runner.go:195] Run: openssl version
	I1026 15:22:28.838355  182377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 15:22:28.850638  182377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:22:28.855574  182377 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 14:16 /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:22:28.855636  182377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:22:28.862555  182377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 15:22:28.874412  182377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/141233.pem && ln -fs /usr/share/ca-certificates/141233.pem /etc/ssl/certs/141233.pem"
	I1026 15:22:28.886395  182377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/141233.pem
	I1026 15:22:28.891025  182377 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 14:24 /usr/share/ca-certificates/141233.pem
	I1026 15:22:28.891082  182377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/141233.pem
	I1026 15:22:28.897923  182377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/141233.pem /etc/ssl/certs/51391683.0"
	I1026 15:22:28.910115  182377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1412332.pem && ln -fs /usr/share/ca-certificates/1412332.pem /etc/ssl/certs/1412332.pem"
	I1026 15:22:28.922622  182377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1412332.pem
	I1026 15:22:28.927296  182377 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 14:24 /usr/share/ca-certificates/1412332.pem
	I1026 15:22:28.927337  182377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1412332.pem
	I1026 15:22:28.934138  182377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1412332.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 15:22:28.945693  182377 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 15:22:28.950557  182377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 15:22:28.957416  182377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 15:22:28.964523  182377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 15:22:28.971586  182377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 15:22:28.978762  182377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 15:22:28.986053  182377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1026 15:22:28.993134  182377 kubeadm.go:400] StartCluster: {Name:newest-cni-574718 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-574718 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.33 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:22:28.993263  182377 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 15:22:28.993323  182377 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 15:22:29.032028  182377 cri.go:89] found id: ""
	I1026 15:22:29.032103  182377 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 15:22:29.043952  182377 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1026 15:22:29.043972  182377 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1026 15:22:29.044040  182377 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1026 15:22:29.056289  182377 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1026 15:22:29.057119  182377 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-574718" does not appear in /home/jenkins/minikube-integration/21664-137233/kubeconfig
	I1026 15:22:29.057648  182377 kubeconfig.go:62] /home/jenkins/minikube-integration/21664-137233/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-574718" cluster setting kubeconfig missing "newest-cni-574718" context setting]
	I1026 15:22:29.058341  182377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-137233/kubeconfig: {Name:mka07626640e842c6c2177ad5f101c4a2dd91d4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:22:29.060135  182377 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1026 15:22:29.070432  182377 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.61.33
	I1026 15:22:29.070477  182377 kubeadm.go:1160] stopping kube-system containers ...
	I1026 15:22:29.070498  182377 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1026 15:22:29.070565  182377 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 15:22:29.108499  182377 cri.go:89] found id: ""
	I1026 15:22:29.108625  182377 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1026 15:22:29.128646  182377 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 15:22:29.140200  182377 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 15:22:29.140217  182377 kubeadm.go:157] found existing configuration files:
	
	I1026 15:22:29.140259  182377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 15:22:29.150547  182377 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 15:22:29.150618  182377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 15:22:29.161551  182377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 15:22:29.171576  182377 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 15:22:29.171637  182377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 15:22:29.182113  182377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 15:22:29.191928  182377 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 15:22:29.191975  182377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 15:22:29.202335  182377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 15:22:29.212043  182377 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 15:22:29.212089  182377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 15:22:29.222315  182377 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 15:22:29.232961  182377 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1026 15:22:29.285078  182377 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1026 15:22:30.940058  182377 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.654938215s)
	I1026 15:22:30.940132  182377 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1026 15:22:31.190262  182377 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1026 15:22:31.246873  182377 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1026 15:22:31.330409  182377 api_server.go:52] waiting for apiserver process to appear ...
	I1026 15:22:31.330532  182377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:22:31.830602  182377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:22:32.330655  182377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:22:32.830666  182377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:22:33.330601  182377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:22:33.376334  182377 api_server.go:72] duration metric: took 2.045939712s to wait for apiserver process to appear ...
	I1026 15:22:33.376368  182377 api_server.go:88] waiting for apiserver healthz status ...
	I1026 15:22:33.376393  182377 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1026 15:22:33.377001  182377 api_server.go:269] stopped: https://192.168.61.33:8443/healthz: Get "https://192.168.61.33:8443/healthz": dial tcp 192.168.61.33:8443: connect: connection refused
	I1026 15:22:33.876665  182377 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1026 15:22:36.154624  182377 api_server.go:279] https://192.168.61.33:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1026 15:22:36.154676  182377 api_server.go:103] status: https://192.168.61.33:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1026 15:22:36.154695  182377 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1026 15:22:36.184996  182377 api_server.go:279] https://192.168.61.33:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1026 15:22:36.185030  182377 api_server.go:103] status: https://192.168.61.33:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1026 15:22:36.377426  182377 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1026 15:22:36.382349  182377 api_server.go:279] https://192.168.61.33:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 15:22:36.382371  182377 api_server.go:103] status: https://192.168.61.33:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 15:22:36.876548  182377 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1026 15:22:36.881970  182377 api_server.go:279] https://192.168.61.33:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 15:22:36.882006  182377 api_server.go:103] status: https://192.168.61.33:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 15:22:37.376698  182377 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1026 15:22:37.384123  182377 api_server.go:279] https://192.168.61.33:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 15:22:37.384156  182377 api_server.go:103] status: https://192.168.61.33:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 15:22:37.876774  182377 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1026 15:22:37.882031  182377 api_server.go:279] https://192.168.61.33:8443/healthz returned 200:
	ok
	I1026 15:22:37.891824  182377 api_server.go:141] control plane version: v1.34.1
	I1026 15:22:37.891850  182377 api_server.go:131] duration metric: took 4.515475379s to wait for apiserver health ...
	I1026 15:22:37.891861  182377 cni.go:84] Creating CNI manager for ""
	I1026 15:22:37.891868  182377 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 15:22:37.893513  182377 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1026 15:22:37.894739  182377 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1026 15:22:37.909012  182377 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1026 15:22:37.935970  182377 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 15:22:37.941779  182377 system_pods.go:59] 8 kube-system pods found
	I1026 15:22:37.941822  182377 system_pods.go:61] "coredns-66bc5c9577-fbtqn" [317aed6d-9584-40f3-9d5c-9e3c670811e8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:22:37.941834  182377 system_pods.go:61] "etcd-newest-cni-574718" [527dfb34-9071-44bf-be3c-75921ad0c849] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:22:37.941848  182377 system_pods.go:61] "kube-apiserver-newest-cni-574718" [4285cb5e-4a30-4d87-8996-1f5fbe723525] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:22:37.941862  182377 system_pods.go:61] "kube-controller-manager-newest-cni-574718" [42199d84-c838-436b-ada5-de73d6269345] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:22:37.941873  182377 system_pods.go:61] "kube-proxy-f9l99" [5e0c5bab-fea7-41d6-bffe-b659055cf68c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1026 15:22:37.941878  182377 system_pods.go:61] "kube-scheduler-newest-cni-574718" [0250002e-226b-45d2-a685-6e315db3d009] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:22:37.941884  182377 system_pods.go:61] "metrics-server-746fcd58dc-7vxxx" [15ffbc76-a090-4786-9808-18f8b4e5ebb8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 15:22:37.941889  182377 system_pods.go:61] "storage-provisioner" [4ec0a217-f2c8-4395-babe-ee26b81a7e69] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:22:37.941897  182377 system_pods.go:74] duration metric: took 5.899576ms to wait for pod list to return data ...
	I1026 15:22:37.941906  182377 node_conditions.go:102] verifying NodePressure condition ...
	I1026 15:22:37.946827  182377 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 15:22:37.946868  182377 node_conditions.go:123] node cpu capacity is 2
	I1026 15:22:37.946885  182377 node_conditions.go:105] duration metric: took 4.973356ms to run NodePressure ...
	I1026 15:22:37.946955  182377 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1026 15:22:38.207008  182377 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 15:22:38.236075  182377 ops.go:34] apiserver oom_adj: -16
	I1026 15:22:38.236107  182377 kubeadm.go:601] duration metric: took 9.192128682s to restartPrimaryControlPlane
	I1026 15:22:38.236126  182377 kubeadm.go:402] duration metric: took 9.243002383s to StartCluster
	I1026 15:22:38.236154  182377 settings.go:142] acquiring lock: {Name:mk260d179873b5d5f15b4780b692965367036bbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:22:38.236270  182377 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-137233/kubeconfig
	I1026 15:22:38.238433  182377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-137233/kubeconfig: {Name:mka07626640e842c6c2177ad5f101c4a2dd91d4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:22:38.238827  182377 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.33 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:22:38.238959  182377 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 15:22:38.239088  182377 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-574718"
	I1026 15:22:38.239110  182377 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-574718"
	W1026 15:22:38.239120  182377 addons.go:247] addon storage-provisioner should already be in state true
	I1026 15:22:38.239127  182377 addons.go:69] Setting default-storageclass=true in profile "newest-cni-574718"
	I1026 15:22:38.239155  182377 host.go:66] Checking if "newest-cni-574718" exists ...
	I1026 15:22:38.239168  182377 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-574718"
	I1026 15:22:38.239190  182377 addons.go:69] Setting dashboard=true in profile "newest-cni-574718"
	I1026 15:22:38.239234  182377 addons.go:238] Setting addon dashboard=true in "newest-cni-574718"
	W1026 15:22:38.239252  182377 addons.go:247] addon dashboard should already be in state true
	I1026 15:22:38.239176  182377 config.go:182] Loaded profile config "newest-cni-574718": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:22:38.239296  182377 host.go:66] Checking if "newest-cni-574718" exists ...
	I1026 15:22:38.239172  182377 addons.go:69] Setting metrics-server=true in profile "newest-cni-574718"
	I1026 15:22:38.239373  182377 addons.go:238] Setting addon metrics-server=true in "newest-cni-574718"
	W1026 15:22:38.239384  182377 addons.go:247] addon metrics-server should already be in state true
	I1026 15:22:38.239411  182377 host.go:66] Checking if "newest-cni-574718" exists ...
	I1026 15:22:38.240384  182377 out.go:179] * Verifying Kubernetes components...
	I1026 15:22:38.241817  182377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:22:38.243158  182377 addons.go:238] Setting addon default-storageclass=true in "newest-cni-574718"
	W1026 15:22:38.243174  182377 addons.go:247] addon default-storageclass should already be in state true
	I1026 15:22:38.243191  182377 host.go:66] Checking if "newest-cni-574718" exists ...
	I1026 15:22:38.243431  182377 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:22:38.243449  182377 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1026 15:22:38.243435  182377 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1026 15:22:38.244547  182377 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 15:22:38.244562  182377 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 15:22:38.244795  182377 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1026 15:22:38.244828  182377 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1026 15:22:38.244850  182377 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:22:38.244868  182377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 15:22:38.245802  182377 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1026 15:22:38.246890  182377 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1026 15:22:38.246914  182377 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1026 15:22:38.248534  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:38.248638  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:38.248957  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:38.249338  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:38.249373  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:38.249432  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:38.249474  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:38.249621  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:38.249648  182377 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/newest-cni-574718/id_rsa Username:docker}
	I1026 15:22:38.249665  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:38.249857  182377 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/newest-cni-574718/id_rsa Username:docker}
	I1026 15:22:38.249989  182377 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/newest-cni-574718/id_rsa Username:docker}
	I1026 15:22:38.250917  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:38.251364  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:38.251395  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:38.251570  182377 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/newest-cni-574718/id_rsa Username:docker}
	I1026 15:22:38.548715  182377 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:22:38.574744  182377 api_server.go:52] waiting for apiserver process to appear ...
	I1026 15:22:38.574851  182377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:22:38.594161  182377 api_server.go:72] duration metric: took 355.284664ms to wait for apiserver process to appear ...
	I1026 15:22:38.594202  182377 api_server.go:88] waiting for apiserver healthz status ...
	I1026 15:22:38.594226  182377 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1026 15:22:38.599953  182377 api_server.go:279] https://192.168.61.33:8443/healthz returned 200:
	ok
	I1026 15:22:38.601088  182377 api_server.go:141] control plane version: v1.34.1
	I1026 15:22:38.601116  182377 api_server.go:131] duration metric: took 6.905101ms to wait for apiserver health ...
	I1026 15:22:38.601130  182377 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 15:22:38.604838  182377 system_pods.go:59] 8 kube-system pods found
	I1026 15:22:38.604863  182377 system_pods.go:61] "coredns-66bc5c9577-fbtqn" [317aed6d-9584-40f3-9d5c-9e3c670811e8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:22:38.604872  182377 system_pods.go:61] "etcd-newest-cni-574718" [527dfb34-9071-44bf-be3c-75921ad0c849] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:22:38.604886  182377 system_pods.go:61] "kube-apiserver-newest-cni-574718" [4285cb5e-4a30-4d87-8996-1f5fbe723525] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:22:38.604917  182377 system_pods.go:61] "kube-controller-manager-newest-cni-574718" [42199d84-c838-436b-ada5-de73d6269345] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:22:38.604924  182377 system_pods.go:61] "kube-proxy-f9l99" [5e0c5bab-fea7-41d6-bffe-b659055cf68c] Running
	I1026 15:22:38.604930  182377 system_pods.go:61] "kube-scheduler-newest-cni-574718" [0250002e-226b-45d2-a685-6e315db3d009] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:22:38.604934  182377 system_pods.go:61] "metrics-server-746fcd58dc-7vxxx" [15ffbc76-a090-4786-9808-18f8b4e5ebb8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 15:22:38.604940  182377 system_pods.go:61] "storage-provisioner" [4ec0a217-f2c8-4395-babe-ee26b81a7e69] Running
	I1026 15:22:38.604945  182377 system_pods.go:74] duration metric: took 3.809261ms to wait for pod list to return data ...
	I1026 15:22:38.604952  182377 default_sa.go:34] waiting for default service account to be created ...
	I1026 15:22:38.607878  182377 default_sa.go:45] found service account: "default"
	I1026 15:22:38.607900  182377 default_sa.go:55] duration metric: took 2.941228ms for default service account to be created ...
	I1026 15:22:38.607913  182377 kubeadm.go:586] duration metric: took 369.045368ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1026 15:22:38.607930  182377 node_conditions.go:102] verifying NodePressure condition ...
	I1026 15:22:38.610509  182377 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 15:22:38.610524  182377 node_conditions.go:123] node cpu capacity is 2
	I1026 15:22:38.610536  182377 node_conditions.go:105] duration metric: took 2.601775ms to run NodePressure ...
	I1026 15:22:38.610549  182377 start.go:241] waiting for startup goroutines ...
	I1026 15:22:38.736034  182377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 15:22:38.789628  182377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:22:38.810637  182377 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1026 15:22:38.810662  182377 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1026 15:22:38.831863  182377 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1026 15:22:38.831893  182377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1026 15:22:38.877236  182377 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1026 15:22:38.877280  182377 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1026 15:22:38.881939  182377 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1026 15:22:38.881971  182377 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1026 15:22:38.934545  182377 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 15:22:38.934581  182377 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1026 15:22:38.950819  182377 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1026 15:22:38.950852  182377 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1026 15:22:38.995779  182377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 15:22:39.021057  182377 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1026 15:22:39.021079  182377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1026 15:22:39.079563  182377 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1026 15:22:39.079594  182377 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1026 15:22:39.132351  182377 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1026 15:22:39.132382  182377 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1026 15:22:39.193426  182377 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1026 15:22:39.193470  182377 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1026 15:22:39.235471  182377 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1026 15:22:39.235496  182377 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1026 15:22:39.271746  182377 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 15:22:39.271773  182377 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1026 15:22:39.307718  182377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 15:22:40.193013  182377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.403339708s)
	I1026 15:22:40.408827  182377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.413001507s)
	I1026 15:22:40.408876  182377 addons.go:479] Verifying addon metrics-server=true in "newest-cni-574718"
	I1026 15:22:40.667395  182377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.359629965s)
	I1026 15:22:40.668723  182377 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-574718 addons enable metrics-server
	
	I1026 15:22:40.669858  182377 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I1026 15:22:40.671055  182377 addons.go:514] duration metric: took 2.432108694s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I1026 15:22:40.671096  182377 start.go:246] waiting for cluster config update ...
	I1026 15:22:40.671111  182377 start.go:255] writing updated cluster config ...
	I1026 15:22:40.671384  182377 ssh_runner.go:195] Run: rm -f paused
	I1026 15:22:40.721560  182377 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1026 15:22:40.722854  182377 out.go:179] * Done! kubectl is now configured to use "newest-cni-574718" cluster and "default" namespace by default
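	As a hedged aside on the addon output above: a minimal way to double-check the enabled addons on this profile, assuming the newest-cni-574718 VM is still running and that minikube's usual profile-named kubectl context is in place (these commands are illustrative and are not taken from the test run):
	
		# list addon status for the profile
		minikube -p newest-cni-574718 addons list
		# confirm the dashboard and metrics-server workloads came up
		kubectl --context newest-cni-574718 -n kubernetes-dashboard get pods
		kubectl --context newest-cni-574718 -n kube-system get deploy metrics-server
		# only returns data once metrics-server is actually serving metrics
		kubectl --context newest-cni-574718 top nodes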
	
	
	==> CRI-O <==
	Oct 26 15:31:21 default-k8s-diff-port-705037 crio[882]: time="2025-10-26 15:31:21.825340612Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761492681825316416,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176956,},InodesUsed:&UInt64Value{Value:65,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e79274ea-bc9e-4e0b-aee3-436c9891ec81 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 15:31:21 default-k8s-diff-port-705037 crio[882]: time="2025-10-26 15:31:21.825876301Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=89ad5fb3-86c2-48be-b562-da0933bc26d1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:31:21 default-k8s-diff-port-705037 crio[882]: time="2025-10-26 15:31:21.826002362Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=89ad5fb3-86c2-48be-b562-da0933bc26d1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:31:21 default-k8s-diff-port-705037 crio[882]: time="2025-10-26 15:31:21.826254165Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9a461a7dba024fb23738d1dd34dc0d154211f607d0f0887a2098a7dc8a6a7132,PodSandboxId:0db4ee11cf8f6547a642f65b30ec30ade1bcf9e3b4220dc00b17de4f9878b779,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:6,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1761492470725765720,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-6ffb444bf9-k9ssm,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 847870b5-f0a5-4e62-948d-006420575ba0,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78f2d85d5d3897a0b9cbe341785ab10092923a05b49f358982dd9a3f5c779c8c,PodSandboxId:bcd0a7ea7a5d6e103fc7708bc70d01512047660b62f41576a88205f4f6703fd7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761492168659339177,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d8ee4dc-96c2-4995-a68f-f41e5f0eaf9e,},Annotations
:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6dc43f94cb762259a9a89d79a1060cd93f7b74968e9896a7d880a5f2e1b62b0,PodSandboxId:e9e54484fe80f31d1a071a64c81e96d4a7e7900dc0666f430532cf36ac16daa9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761492154385792829,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 974398e3-6fd7-44da-9ec6-a726c71c9e43,},Annotations:map[string]
string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0412bc06733f8fd0774bd8f073900d3d9db7d5a5cf536fb50551e009b7fa3fce,PodSandboxId:1592430b39646fa93b92ed34c469481ea6ba2a72f29a62e863c7bb325d7cd4e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761492133087872705,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-fs558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35c18482-b39d-4e3f-aafd-51642938f5b0,},Annotations:map[string]string{io.kubernetes.container.
hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e941043507acf00395dec8fb4b6a8dcbbf419dd34749e3a514ef04e1cddfea38,PodSandboxId:885c149c4d0c4c2918bf935cca13b4ab267f244925355d335b6afd84bd86eabe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba6833007935
5e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761492124084005205,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kr5kl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7598b50f-deee-406f-86fc-1f57c2de4887,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67a44ce6a7fe8ed1fe16737d1cd5997ede10c6cdc177d1c4811a71bf5dd0e557,PodSandboxId:e9e54484fe80f31d1a071a64c81e96d4a7e7900dc0666f430532cf36ac16daa9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,S
tate:CONTAINER_EXITED,CreatedAt:1761492124104408623,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 974398e3-6fd7-44da-9ec6-a726c71c9e43,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dd8207831f2d4babe5e8eb34960616d69228f6d5d7816a3727851f8eaac22aa,PodSandboxId:393a00e3f8d416d7933ef5894352dc23e4b694c6d92c1e2ed9d778dc1a9bdffd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUN
NING,CreatedAt:1761492120445632329,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-705037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be371a0653beff17fc8179eadadb47ba,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b62b8fa07e019bae4bcb3a7e00b13211a1422309b5e2b3e490e08cf683e50047,PodSandboxId:9bb3db4855e82964e1440b256ac2e4566ce40d9f863d2416877cf24ebd75c316,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761492120434222475,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-705037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734913a4a596eb14eb488c352898c34e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d681a01f9792386a937644a3faeb309289acee370899afe44b650d7cb7ccb97b,PodSandboxId:0f393b3f130a63425f87e19d45543796d7e07b7ed4abf90a19f5c867429ae9f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbaf
e7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761492120381736115,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-705037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad4036b91abbf32d9bc0629e6b234cf1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cf3d81d69cccd2980452083a91aef44484e541762cd9e1304b3ee2e6c6826a2,PodSandboxId:d0d99bc0545f2c576e
1c4881e50f4c58b10cac1e059676da641bbc6d088d9431,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761492120346793896,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-705037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64eca7cc9b3960c61fd085cf0d208e7b,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8444,\"containerPort\":8444,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod
: 30,},},},}" file="otel-collector/interceptors.go:74" id=89ad5fb3-86c2-48be-b562-da0933bc26d1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:31:21 default-k8s-diff-port-705037 crio[882]: time="2025-10-26 15:31:21.864034575Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b3acceb5-67be-4270-8262-2a43c0d925de name=/runtime.v1.RuntimeService/Version
	Oct 26 15:31:21 default-k8s-diff-port-705037 crio[882]: time="2025-10-26 15:31:21.864259085Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b3acceb5-67be-4270-8262-2a43c0d925de name=/runtime.v1.RuntimeService/Version
	Oct 26 15:31:21 default-k8s-diff-port-705037 crio[882]: time="2025-10-26 15:31:21.865588059Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f803a31c-a268-46d7-8439-54ae26ab2ff6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 15:31:21 default-k8s-diff-port-705037 crio[882]: time="2025-10-26 15:31:21.866091671Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761492681866069566,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176956,},InodesUsed:&UInt64Value{Value:65,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f803a31c-a268-46d7-8439-54ae26ab2ff6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 15:31:21 default-k8s-diff-port-705037 crio[882]: time="2025-10-26 15:31:21.866956533Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7f0eab95-0b46-4e4f-a02a-067364b5777e name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:31:21 default-k8s-diff-port-705037 crio[882]: time="2025-10-26 15:31:21.867117681Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7f0eab95-0b46-4e4f-a02a-067364b5777e name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:31:21 default-k8s-diff-port-705037 crio[882]: time="2025-10-26 15:31:21.867370254Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9a461a7dba024fb23738d1dd34dc0d154211f607d0f0887a2098a7dc8a6a7132,PodSandboxId:0db4ee11cf8f6547a642f65b30ec30ade1bcf9e3b4220dc00b17de4f9878b779,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:6,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1761492470725765720,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-6ffb444bf9-k9ssm,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 847870b5-f0a5-4e62-948d-006420575ba0,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78f2d85d5d3897a0b9cbe341785ab10092923a05b49f358982dd9a3f5c779c8c,PodSandboxId:bcd0a7ea7a5d6e103fc7708bc70d01512047660b62f41576a88205f4f6703fd7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761492168659339177,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d8ee4dc-96c2-4995-a68f-f41e5f0eaf9e,},Annotations
:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6dc43f94cb762259a9a89d79a1060cd93f7b74968e9896a7d880a5f2e1b62b0,PodSandboxId:e9e54484fe80f31d1a071a64c81e96d4a7e7900dc0666f430532cf36ac16daa9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761492154385792829,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 974398e3-6fd7-44da-9ec6-a726c71c9e43,},Annotations:map[string]
string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0412bc06733f8fd0774bd8f073900d3d9db7d5a5cf536fb50551e009b7fa3fce,PodSandboxId:1592430b39646fa93b92ed34c469481ea6ba2a72f29a62e863c7bb325d7cd4e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761492133087872705,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-fs558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35c18482-b39d-4e3f-aafd-51642938f5b0,},Annotations:map[string]string{io.kubernetes.container.
hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e941043507acf00395dec8fb4b6a8dcbbf419dd34749e3a514ef04e1cddfea38,PodSandboxId:885c149c4d0c4c2918bf935cca13b4ab267f244925355d335b6afd84bd86eabe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba6833007935
5e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761492124084005205,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kr5kl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7598b50f-deee-406f-86fc-1f57c2de4887,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67a44ce6a7fe8ed1fe16737d1cd5997ede10c6cdc177d1c4811a71bf5dd0e557,PodSandboxId:e9e54484fe80f31d1a071a64c81e96d4a7e7900dc0666f430532cf36ac16daa9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,S
tate:CONTAINER_EXITED,CreatedAt:1761492124104408623,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 974398e3-6fd7-44da-9ec6-a726c71c9e43,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dd8207831f2d4babe5e8eb34960616d69228f6d5d7816a3727851f8eaac22aa,PodSandboxId:393a00e3f8d416d7933ef5894352dc23e4b694c6d92c1e2ed9d778dc1a9bdffd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUN
NING,CreatedAt:1761492120445632329,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-705037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be371a0653beff17fc8179eadadb47ba,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b62b8fa07e019bae4bcb3a7e00b13211a1422309b5e2b3e490e08cf683e50047,PodSandboxId:9bb3db4855e82964e1440b256ac2e4566ce40d9f863d2416877cf24ebd75c316,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761492120434222475,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-705037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734913a4a596eb14eb488c352898c34e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d681a01f9792386a937644a3faeb309289acee370899afe44b650d7cb7ccb97b,PodSandboxId:0f393b3f130a63425f87e19d45543796d7e07b7ed4abf90a19f5c867429ae9f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbaf
e7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761492120381736115,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-705037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad4036b91abbf32d9bc0629e6b234cf1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cf3d81d69cccd2980452083a91aef44484e541762cd9e1304b3ee2e6c6826a2,PodSandboxId:d0d99bc0545f2c576e
1c4881e50f4c58b10cac1e059676da641bbc6d088d9431,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761492120346793896,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-705037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64eca7cc9b3960c61fd085cf0d208e7b,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8444,\"containerPort\":8444,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod
: 30,},},},}" file="otel-collector/interceptors.go:74" id=7f0eab95-0b46-4e4f-a02a-067364b5777e name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:31:21 default-k8s-diff-port-705037 crio[882]: time="2025-10-26 15:31:21.900394646Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0cb4b4ba-af04-4e70-bd25-5ad898536297 name=/runtime.v1.RuntimeService/Version
	Oct 26 15:31:21 default-k8s-diff-port-705037 crio[882]: time="2025-10-26 15:31:21.900466013Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0cb4b4ba-af04-4e70-bd25-5ad898536297 name=/runtime.v1.RuntimeService/Version
	Oct 26 15:31:21 default-k8s-diff-port-705037 crio[882]: time="2025-10-26 15:31:21.902271052Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7be243b3-b4f6-4e6a-9718-68725885deff name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 15:31:21 default-k8s-diff-port-705037 crio[882]: time="2025-10-26 15:31:21.902761353Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761492681902686086,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176956,},InodesUsed:&UInt64Value{Value:65,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7be243b3-b4f6-4e6a-9718-68725885deff name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 15:31:21 default-k8s-diff-port-705037 crio[882]: time="2025-10-26 15:31:21.903473568Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e57e1895-9992-49bf-ae94-f3857e3884fb name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:31:21 default-k8s-diff-port-705037 crio[882]: time="2025-10-26 15:31:21.903538780Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e57e1895-9992-49bf-ae94-f3857e3884fb name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:31:21 default-k8s-diff-port-705037 crio[882]: time="2025-10-26 15:31:21.903753536Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9a461a7dba024fb23738d1dd34dc0d154211f607d0f0887a2098a7dc8a6a7132,PodSandboxId:0db4ee11cf8f6547a642f65b30ec30ade1bcf9e3b4220dc00b17de4f9878b779,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:6,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1761492470725765720,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-6ffb444bf9-k9ssm,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 847870b5-f0a5-4e62-948d-006420575ba0,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78f2d85d5d3897a0b9cbe341785ab10092923a05b49f358982dd9a3f5c779c8c,PodSandboxId:bcd0a7ea7a5d6e103fc7708bc70d01512047660b62f41576a88205f4f6703fd7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761492168659339177,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d8ee4dc-96c2-4995-a68f-f41e5f0eaf9e,},Annotations
:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6dc43f94cb762259a9a89d79a1060cd93f7b74968e9896a7d880a5f2e1b62b0,PodSandboxId:e9e54484fe80f31d1a071a64c81e96d4a7e7900dc0666f430532cf36ac16daa9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761492154385792829,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 974398e3-6fd7-44da-9ec6-a726c71c9e43,},Annotations:map[string]
string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0412bc06733f8fd0774bd8f073900d3d9db7d5a5cf536fb50551e009b7fa3fce,PodSandboxId:1592430b39646fa93b92ed34c469481ea6ba2a72f29a62e863c7bb325d7cd4e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761492133087872705,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-fs558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35c18482-b39d-4e3f-aafd-51642938f5b0,},Annotations:map[string]string{io.kubernetes.container.
hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e941043507acf00395dec8fb4b6a8dcbbf419dd34749e3a514ef04e1cddfea38,PodSandboxId:885c149c4d0c4c2918bf935cca13b4ab267f244925355d335b6afd84bd86eabe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba6833007935
5e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761492124084005205,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kr5kl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7598b50f-deee-406f-86fc-1f57c2de4887,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67a44ce6a7fe8ed1fe16737d1cd5997ede10c6cdc177d1c4811a71bf5dd0e557,PodSandboxId:e9e54484fe80f31d1a071a64c81e96d4a7e7900dc0666f430532cf36ac16daa9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,S
tate:CONTAINER_EXITED,CreatedAt:1761492124104408623,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 974398e3-6fd7-44da-9ec6-a726c71c9e43,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dd8207831f2d4babe5e8eb34960616d69228f6d5d7816a3727851f8eaac22aa,PodSandboxId:393a00e3f8d416d7933ef5894352dc23e4b694c6d92c1e2ed9d778dc1a9bdffd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUN
NING,CreatedAt:1761492120445632329,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-705037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be371a0653beff17fc8179eadadb47ba,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b62b8fa07e019bae4bcb3a7e00b13211a1422309b5e2b3e490e08cf683e50047,PodSandboxId:9bb3db4855e82964e1440b256ac2e4566ce40d9f863d2416877cf24ebd75c316,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761492120434222475,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-705037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734913a4a596eb14eb488c352898c34e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d681a01f9792386a937644a3faeb309289acee370899afe44b650d7cb7ccb97b,PodSandboxId:0f393b3f130a63425f87e19d45543796d7e07b7ed4abf90a19f5c867429ae9f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbaf
e7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761492120381736115,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-705037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad4036b91abbf32d9bc0629e6b234cf1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cf3d81d69cccd2980452083a91aef44484e541762cd9e1304b3ee2e6c6826a2,PodSandboxId:d0d99bc0545f2c576e
1c4881e50f4c58b10cac1e059676da641bbc6d088d9431,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761492120346793896,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-705037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64eca7cc9b3960c61fd085cf0d208e7b,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8444,\"containerPort\":8444,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod
: 30,},},},}" file="otel-collector/interceptors.go:74" id=e57e1895-9992-49bf-ae94-f3857e3884fb name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:31:21 default-k8s-diff-port-705037 crio[882]: time="2025-10-26 15:31:21.945041044Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=24389356-dded-44ee-96e4-54c8f1549282 name=/runtime.v1.RuntimeService/Version
	Oct 26 15:31:21 default-k8s-diff-port-705037 crio[882]: time="2025-10-26 15:31:21.945147684Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=24389356-dded-44ee-96e4-54c8f1549282 name=/runtime.v1.RuntimeService/Version
	Oct 26 15:31:21 default-k8s-diff-port-705037 crio[882]: time="2025-10-26 15:31:21.946491051Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=36785c55-29d4-4dba-8cfc-7cfe450f7891 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 15:31:21 default-k8s-diff-port-705037 crio[882]: time="2025-10-26 15:31:21.947147090Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761492681947096386,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176956,},InodesUsed:&UInt64Value{Value:65,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=36785c55-29d4-4dba-8cfc-7cfe450f7891 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 15:31:21 default-k8s-diff-port-705037 crio[882]: time="2025-10-26 15:31:21.947798743Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=88912406-6211-4d01-b139-aad2a9e095b3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:31:21 default-k8s-diff-port-705037 crio[882]: time="2025-10-26 15:31:21.947854293Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=88912406-6211-4d01-b139-aad2a9e095b3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:31:21 default-k8s-diff-port-705037 crio[882]: time="2025-10-26 15:31:21.948102033Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9a461a7dba024fb23738d1dd34dc0d154211f607d0f0887a2098a7dc8a6a7132,PodSandboxId:0db4ee11cf8f6547a642f65b30ec30ade1bcf9e3b4220dc00b17de4f9878b779,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:6,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1761492470725765720,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-6ffb444bf9-k9ssm,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 847870b5-f0a5-4e62-948d-006420575ba0,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78f2d85d5d3897a0b9cbe341785ab10092923a05b49f358982dd9a3f5c779c8c,PodSandboxId:bcd0a7ea7a5d6e103fc7708bc70d01512047660b62f41576a88205f4f6703fd7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761492168659339177,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d8ee4dc-96c2-4995-a68f-f41e5f0eaf9e,},Annotations
:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6dc43f94cb762259a9a89d79a1060cd93f7b74968e9896a7d880a5f2e1b62b0,PodSandboxId:e9e54484fe80f31d1a071a64c81e96d4a7e7900dc0666f430532cf36ac16daa9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761492154385792829,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 974398e3-6fd7-44da-9ec6-a726c71c9e43,},Annotations:map[string]
string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0412bc06733f8fd0774bd8f073900d3d9db7d5a5cf536fb50551e009b7fa3fce,PodSandboxId:1592430b39646fa93b92ed34c469481ea6ba2a72f29a62e863c7bb325d7cd4e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761492133087872705,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-fs558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35c18482-b39d-4e3f-aafd-51642938f5b0,},Annotations:map[string]string{io.kubernetes.container.
hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e941043507acf00395dec8fb4b6a8dcbbf419dd34749e3a514ef04e1cddfea38,PodSandboxId:885c149c4d0c4c2918bf935cca13b4ab267f244925355d335b6afd84bd86eabe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba6833007935
5e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761492124084005205,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kr5kl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7598b50f-deee-406f-86fc-1f57c2de4887,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67a44ce6a7fe8ed1fe16737d1cd5997ede10c6cdc177d1c4811a71bf5dd0e557,PodSandboxId:e9e54484fe80f31d1a071a64c81e96d4a7e7900dc0666f430532cf36ac16daa9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,S
tate:CONTAINER_EXITED,CreatedAt:1761492124104408623,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 974398e3-6fd7-44da-9ec6-a726c71c9e43,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dd8207831f2d4babe5e8eb34960616d69228f6d5d7816a3727851f8eaac22aa,PodSandboxId:393a00e3f8d416d7933ef5894352dc23e4b694c6d92c1e2ed9d778dc1a9bdffd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUN
NING,CreatedAt:1761492120445632329,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-705037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be371a0653beff17fc8179eadadb47ba,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b62b8fa07e019bae4bcb3a7e00b13211a1422309b5e2b3e490e08cf683e50047,PodSandboxId:9bb3db4855e82964e1440b256ac2e4566ce40d9f863d2416877cf24ebd75c316,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761492120434222475,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-705037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734913a4a596eb14eb488c352898c34e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d681a01f9792386a937644a3faeb309289acee370899afe44b650d7cb7ccb97b,PodSandboxId:0f393b3f130a63425f87e19d45543796d7e07b7ed4abf90a19f5c867429ae9f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbaf
e7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761492120381736115,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-705037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad4036b91abbf32d9bc0629e6b234cf1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cf3d81d69cccd2980452083a91aef44484e541762cd9e1304b3ee2e6c6826a2,PodSandboxId:d0d99bc0545f2c576e
1c4881e50f4c58b10cac1e059676da641bbc6d088d9431,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761492120346793896,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-705037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64eca7cc9b3960c61fd085cf0d208e7b,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8444,\"containerPort\":8444,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod
: 30,},},},}" file="otel-collector/interceptors.go:74" id=88912406-6211-4d01-b139-aad2a9e095b3 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	9a461a7dba024       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                      3 minutes ago       Exited              dashboard-metrics-scraper   6                   0db4ee11cf8f6       dashboard-metrics-scraper-6ffb444bf9-k9ssm
	78f2d85d5d389       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   8 minutes ago       Running             busybox                     1                   bcd0a7ea7a5d6       busybox
	e6dc43f94cb76       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago       Running             storage-provisioner         3                   e9e54484fe80f       storage-provisioner
	0412bc06733f8       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      9 minutes ago       Running             coredns                     1                   1592430b39646       coredns-66bc5c9577-fs558
	67a44ce6a7fe8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner         2                   e9e54484fe80f       storage-provisioner
	e941043507acf       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      9 minutes ago       Running             kube-proxy                  1                   885c149c4d0c4       kube-proxy-kr5kl
	4dd8207831f2d       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      9 minutes ago       Running             kube-scheduler              1                   393a00e3f8d41       kube-scheduler-default-k8s-diff-port-705037
	b62b8fa07e019       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      9 minutes ago       Running             etcd                        1                   9bb3db4855e82       etcd-default-k8s-diff-port-705037
	d681a01f97923       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      9 minutes ago       Running             kube-controller-manager     1                   0f393b3f130a6       kube-controller-manager-default-k8s-diff-port-705037
	1cf3d81d69ccc       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      9 minutes ago       Running             kube-apiserver              1                   d0d99bc0545f2       kube-apiserver-default-k8s-diff-port-705037
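	The container status table above shows dashboard-metrics-scraper in an Exited state on its 6th attempt. A minimal sketch of how that could be investigated, using the container ID and pod name that appear in the table and logs above (the kubectl context name is assumed to match the profile, per minikube convention; the commands are illustrative, not from the test run):
	
		# inside the node: inspect the exited container and its last log lines
		minikube -p default-k8s-diff-port-705037 ssh
		sudo crictl ps -a | grep dashboard-metrics-scraper
		sudo crictl logs 9a461a7dba024
		# from the host: logs of the previous attempt and recent pod events
		kubectl --context default-k8s-diff-port-705037 -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-k9ssm --previous
		kubectl --context default-k8s-diff-port-705037 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-6ffb444bf9-k9ssm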
	
	
	==> coredns [0412bc06733f8fd0774bd8f073900d3d9db7d5a5cf536fb50551e009b7fa3fce] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33122 - 62530 "HINFO IN 6525439122859490430.3700641182551545693. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.029488252s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-705037
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-705037
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46
	                    minikube.k8s.io/name=default-k8s-diff-port-705037
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T15_19_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 15:19:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-705037
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 15:31:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 15:27:09 +0000   Sun, 26 Oct 2025 15:19:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 15:27:09 +0000   Sun, 26 Oct 2025 15:19:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 15:27:09 +0000   Sun, 26 Oct 2025 15:19:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 15:27:09 +0000   Sun, 26 Oct 2025 15:22:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.253
	  Hostname:    default-k8s-diff-port-705037
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 a056f452638844dc8e66f24d5e133cba
	  System UUID:                a056f452-6388-44dc-8e66-f24d5e133cba
	  Boot ID:                    2f85c34a-af7e-46e9-ad10-a1b5ca5b3806
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-66bc5c9577-fs558                                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     12m
	  kube-system                 etcd-default-k8s-diff-port-705037                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         12m
	  kube-system                 kube-apiserver-default-k8s-diff-port-705037             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-705037    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-kr5kl                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-default-k8s-diff-port-705037             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-746fcd58dc-nsvb5                         100m (5%)     0 (0%)      200Mi (6%)       0 (0%)         11m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-k9ssm              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-c8wqg                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (12%)  170Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   Starting                 9m17s                  kube-proxy       
	  Normal   Starting                 12m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node default-k8s-diff-port-705037 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node default-k8s-diff-port-705037 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node default-k8s-diff-port-705037 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 12m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    12m                    kubelet          Node default-k8s-diff-port-705037 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                    kubelet          Node default-k8s-diff-port-705037 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  12m                    kubelet          Node default-k8s-diff-port-705037 status is now: NodeHasSufficientMemory
	  Normal   NodeReady                12m                    kubelet          Node default-k8s-diff-port-705037 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node default-k8s-diff-port-705037 event: Registered Node default-k8s-diff-port-705037 in Controller
	  Normal   Starting                 9m23s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  9m23s (x8 over 9m23s)  kubelet          Node default-k8s-diff-port-705037 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m23s (x8 over 9m23s)  kubelet          Node default-k8s-diff-port-705037 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m23s (x7 over 9m23s)  kubelet          Node default-k8s-diff-port-705037 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  9m23s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 9m19s                  kubelet          Node default-k8s-diff-port-705037 has been rebooted, boot id: 2f85c34a-af7e-46e9-ad10-a1b5ca5b3806
	  Normal   RegisteredNode           9m15s                  node-controller  Node default-k8s-diff-port-705037 event: Registered Node default-k8s-diff-port-705037 in Controller
	
	
	==> dmesg <==
	[Oct26 15:21] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001579] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000998] (rpcbind)[121]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.786519] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000022] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.124278] kauditd_printk_skb: 88 callbacks suppressed
	[Oct26 15:22] kauditd_printk_skb: 196 callbacks suppressed
	[  +0.077380] kauditd_printk_skb: 218 callbacks suppressed
	[  +1.602137] kauditd_printk_skb: 134 callbacks suppressed
	[  +0.034945] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.623285] kauditd_printk_skb: 6 callbacks suppressed
	[ +11.030191] kauditd_printk_skb: 5 callbacks suppressed
	[Oct26 15:23] kauditd_printk_skb: 27 callbacks suppressed
	[Oct26 15:25] kauditd_printk_skb: 6 callbacks suppressed
	[Oct26 15:27] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [b62b8fa07e019bae4bcb3a7e00b13211a1422309b5e2b3e490e08cf683e50047] <==
	{"level":"warn","ts":"2025-10-26T15:22:02.258967Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:22:02.278136Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:22:02.286285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:22:02.305319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:22:02.319540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:22:02.338450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:22:02.352132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:22:02.378078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:22:02.405427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:22:02.419887Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:22:02.440847Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:22:02.454673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:22:02.462578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:22:02.480315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:22:02.488679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:22:02.500989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:22:02.522556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:22:02.535041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:22:02.547169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:22:02.558621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:22:02.577356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:22:02.584078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:22:02.593999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:22:02.705984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:22:31.020266Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.152639ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7885127989601838997 > lease_revoke:<id:6d6d9a211cb5271f>","response":"size:28"}
	
	
	==> kernel <==
	 15:31:22 up 9 min,  0 users,  load average: 0.15, 0.12, 0.08
	Linux default-k8s-diff-port-705037 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [1cf3d81d69cccd2980452083a91aef44484e541762cd9e1304b3ee2e6c6826a2] <==
	E1026 15:27:04.481063       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1026 15:27:04.481065       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1026 15:27:04.481072       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1026 15:27:04.482214       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1026 15:28:04.482027       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 15:28:04.482093       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1026 15:28:04.482104       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1026 15:28:04.482370       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 15:28:04.482394       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1026 15:28:04.484118       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1026 15:30:04.483103       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 15:30:04.483209       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1026 15:30:04.483221       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1026 15:30:04.485285       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 15:30:04.485312       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1026 15:30:04.485324       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [d681a01f9792386a937644a3faeb309289acee370899afe44b650d7cb7ccb97b] <==
	I1026 15:25:07.961401       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1026 15:25:37.956153       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 15:25:37.968402       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1026 15:26:07.961308       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 15:26:07.976744       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1026 15:26:37.965767       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 15:26:37.986804       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1026 15:27:07.970339       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 15:27:07.993434       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1026 15:27:37.975080       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 15:27:38.000195       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1026 15:28:07.982410       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 15:28:08.009271       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1026 15:28:37.986993       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 15:28:38.016581       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1026 15:29:07.991637       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 15:29:08.024046       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1026 15:29:37.996551       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 15:29:38.031737       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1026 15:30:08.001259       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 15:30:08.038730       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1026 15:30:38.006027       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 15:30:38.047203       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1026 15:31:08.010853       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 15:31:08.054349       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [e941043507acf00395dec8fb4b6a8dcbbf419dd34749e3a514ef04e1cddfea38] <==
	I1026 15:22:04.292327       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 15:22:04.394477       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 15:22:04.394520       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.72.253"]
	E1026 15:22:04.394617       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 15:22:04.469563       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1026 15:22:04.469654       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1026 15:22:04.469720       1 server_linux.go:132] "Using iptables Proxier"
	I1026 15:22:04.508220       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 15:22:04.508746       1 server.go:527] "Version info" version="v1.34.1"
	I1026 15:22:04.508807       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:22:04.514600       1 config.go:200] "Starting service config controller"
	I1026 15:22:04.514664       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 15:22:04.514682       1 config.go:106] "Starting endpoint slice config controller"
	I1026 15:22:04.514686       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 15:22:04.514695       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 15:22:04.514780       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 15:22:04.522689       1 config.go:309] "Starting node config controller"
	I1026 15:22:04.523596       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 15:22:04.523851       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 15:22:04.614825       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1026 15:22:04.614865       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 15:22:04.614883       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [4dd8207831f2d4babe5e8eb34960616d69228f6d5d7816a3727851f8eaac22aa] <==
	I1026 15:22:03.395771       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:22:03.402633       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1026 15:22:03.402745       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:22:03.402771       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:22:03.403484       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1026 15:22:03.443272       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1026 15:22:03.443366       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 15:22:03.443873       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1026 15:22:03.444333       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 15:22:03.444416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 15:22:03.444498       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 15:22:03.444553       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1026 15:22:03.444644       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1026 15:22:03.444737       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1026 15:22:03.444804       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 15:22:03.444869       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 15:22:03.447027       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1026 15:22:03.447128       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1026 15:22:03.447187       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1026 15:22:03.447252       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1026 15:22:03.447373       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1026 15:22:03.447436       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1026 15:22:03.447497       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1026 15:22:03.448798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1026 15:22:04.303395       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 15:30:27 default-k8s-diff-port-705037 kubelet[1214]: E1026 15:30:27.715319    1214 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-c8wqg" podUID="cc5b36c9-7c56-4a05-8b30-8bf6d2b12ef4"
	Oct 26 15:30:29 default-k8s-diff-port-705037 kubelet[1214]: E1026 15:30:29.846728    1214 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761492629846257241  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 26 15:30:29 default-k8s-diff-port-705037 kubelet[1214]: E1026 15:30:29.846767    1214 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761492629846257241  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 26 15:30:31 default-k8s-diff-port-705037 kubelet[1214]: E1026 15:30:31.714670    1214 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-nsvb5" podUID="28c11adc-3f4d-46bc-abc5-f9b466e2ca10"
	Oct 26 15:30:32 default-k8s-diff-port-705037 kubelet[1214]: I1026 15:30:32.713370    1214 scope.go:117] "RemoveContainer" containerID="9a461a7dba024fb23738d1dd34dc0d154211f607d0f0887a2098a7dc8a6a7132"
	Oct 26 15:30:32 default-k8s-diff-port-705037 kubelet[1214]: E1026 15:30:32.713628    1214 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-k9ssm_kubernetes-dashboard(847870b5-f0a5-4e62-948d-006420575ba0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k9ssm" podUID="847870b5-f0a5-4e62-948d-006420575ba0"
	Oct 26 15:30:39 default-k8s-diff-port-705037 kubelet[1214]: E1026 15:30:39.849115    1214 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761492639848779763  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 26 15:30:39 default-k8s-diff-port-705037 kubelet[1214]: E1026 15:30:39.849152    1214 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761492639848779763  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 26 15:30:43 default-k8s-diff-port-705037 kubelet[1214]: E1026 15:30:43.715525    1214 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-nsvb5" podUID="28c11adc-3f4d-46bc-abc5-f9b466e2ca10"
	Oct 26 15:30:44 default-k8s-diff-port-705037 kubelet[1214]: I1026 15:30:44.713200    1214 scope.go:117] "RemoveContainer" containerID="9a461a7dba024fb23738d1dd34dc0d154211f607d0f0887a2098a7dc8a6a7132"
	Oct 26 15:30:44 default-k8s-diff-port-705037 kubelet[1214]: E1026 15:30:44.713643    1214 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-k9ssm_kubernetes-dashboard(847870b5-f0a5-4e62-948d-006420575ba0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k9ssm" podUID="847870b5-f0a5-4e62-948d-006420575ba0"
	Oct 26 15:30:49 default-k8s-diff-port-705037 kubelet[1214]: E1026 15:30:49.850663    1214 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761492649850094592  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 26 15:30:49 default-k8s-diff-port-705037 kubelet[1214]: E1026 15:30:49.850683    1214 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761492649850094592  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 26 15:30:57 default-k8s-diff-port-705037 kubelet[1214]: E1026 15:30:57.714646    1214 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-nsvb5" podUID="28c11adc-3f4d-46bc-abc5-f9b466e2ca10"
	Oct 26 15:30:58 default-k8s-diff-port-705037 kubelet[1214]: I1026 15:30:58.713893    1214 scope.go:117] "RemoveContainer" containerID="9a461a7dba024fb23738d1dd34dc0d154211f607d0f0887a2098a7dc8a6a7132"
	Oct 26 15:30:58 default-k8s-diff-port-705037 kubelet[1214]: E1026 15:30:58.714235    1214 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-k9ssm_kubernetes-dashboard(847870b5-f0a5-4e62-948d-006420575ba0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k9ssm" podUID="847870b5-f0a5-4e62-948d-006420575ba0"
	Oct 26 15:30:59 default-k8s-diff-port-705037 kubelet[1214]: E1026 15:30:59.851856    1214 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761492659851547530  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 26 15:30:59 default-k8s-diff-port-705037 kubelet[1214]: E1026 15:30:59.851890    1214 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761492659851547530  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 26 15:31:09 default-k8s-diff-port-705037 kubelet[1214]: E1026 15:31:09.715797    1214 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-nsvb5" podUID="28c11adc-3f4d-46bc-abc5-f9b466e2ca10"
	Oct 26 15:31:09 default-k8s-diff-port-705037 kubelet[1214]: E1026 15:31:09.856078    1214 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761492669855723034  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 26 15:31:09 default-k8s-diff-port-705037 kubelet[1214]: E1026 15:31:09.856115    1214 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761492669855723034  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 26 15:31:10 default-k8s-diff-port-705037 kubelet[1214]: I1026 15:31:10.712720    1214 scope.go:117] "RemoveContainer" containerID="9a461a7dba024fb23738d1dd34dc0d154211f607d0f0887a2098a7dc8a6a7132"
	Oct 26 15:31:10 default-k8s-diff-port-705037 kubelet[1214]: E1026 15:31:10.712947    1214 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-k9ssm_kubernetes-dashboard(847870b5-f0a5-4e62-948d-006420575ba0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k9ssm" podUID="847870b5-f0a5-4e62-948d-006420575ba0"
	Oct 26 15:31:19 default-k8s-diff-port-705037 kubelet[1214]: E1026 15:31:19.858471    1214 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761492679858081403  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 26 15:31:19 default-k8s-diff-port-705037 kubelet[1214]: E1026 15:31:19.858489    1214 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761492679858081403  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	
	
	==> storage-provisioner [67a44ce6a7fe8ed1fe16737d1cd5997ede10c6cdc177d1c4811a71bf5dd0e557] <==
	I1026 15:22:04.227382       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1026 15:22:34.231775       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [e6dc43f94cb762259a9a89d79a1060cd93f7b74968e9896a7d880a5f2e1b62b0] <==
	W1026 15:30:58.171527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:31:00.174232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:31:00.178612       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:31:02.182281       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:31:02.186521       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:31:04.190796       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:31:04.196857       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:31:06.200523       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:31:06.204732       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:31:08.208130       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:31:08.213456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:31:10.216434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:31:10.224475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:31:12.228358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:31:12.233383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:31:14.236996       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:31:14.241955       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:31:16.245389       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:31:16.249979       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:31:18.254096       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:31:18.261767       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:31:20.264545       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:31:20.270073       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:31:22.274045       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:31:22.279455       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-705037 -n default-k8s-diff-port-705037
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-705037 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-nsvb5 kubernetes-dashboard-855c9754f9-c8wqg
helpers_test.go:282: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-705037 describe pod metrics-server-746fcd58dc-nsvb5 kubernetes-dashboard-855c9754f9-c8wqg
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-705037 describe pod metrics-server-746fcd58dc-nsvb5 kubernetes-dashboard-855c9754f9-c8wqg: exit status 1 (59.397154ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-nsvb5" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-c8wqg" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context default-k8s-diff-port-705037 describe pod metrics-server-746fcd58dc-nsvb5 kubernetes-dashboard-855c9754f9-c8wqg: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.13s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (542.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-nxc8p" [ee5a7e88-da7c-4c3b-bae0-abbaf5ff76bc] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1026 15:31:18.564716  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/enable-default-cni-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:285: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-163393 -n embed-certs-163393
start_stop_delete_test.go:285: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-10-26 15:39:38.985974492 +0000 UTC m=+5085.579406283
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context embed-certs-163393 describe po kubernetes-dashboard-855c9754f9-nxc8p -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) kubectl --context embed-certs-163393 describe po kubernetes-dashboard-855c9754f9-nxc8p -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-nxc8p
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             embed-certs-163393/192.168.39.103
Start Time:       Sun, 26 Oct 2025 15:21:25 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.10
IPs:
IP:           10.244.0.10
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7g7gr (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-7g7gr:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  18m                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-nxc8p to embed-certs-163393
Warning  Failed     17m                   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": fetching target platform image selected from manifest list: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    13m (x5 over 18m)     kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     12m (x5 over 17m)     kubelet            Error: ErrImagePull
Warning  Failed     12m (x4 over 16m)     kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    3m9s (x47 over 17m)   kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     2m33s (x50 over 17m)  kubelet            Error: ImagePullBackOff
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context embed-certs-163393 logs kubernetes-dashboard-855c9754f9-nxc8p -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) Non-zero exit: kubectl --context embed-certs-163393 logs kubernetes-dashboard-855c9754f9-nxc8p -n kubernetes-dashboard: exit status 1 (69.566633ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-nxc8p" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:285: kubectl --context embed-certs-163393 logs kubernetes-dashboard-855c9754f9-nxc8p -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-163393 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
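
The pod is otherwise scheduled and healthy, so the 9m0s AddonExistsAfterStop wait times out purely on the registry limit. A hedged sketch of two follow-ups one might run against this profile, assuming it still exists and that the dashboard Deployment is named kubernetes-dashboard (neither is asserted by this log): reproduce the rate-limited pull from inside the node, then pre-load the image so the kubelet can use a local copy instead of an anonymous pull.

    # Reproduce the failing pull from inside the node (crictl talks to CRI-O directly).
    minikube -p embed-certs-163393 ssh -- sudo crictl pull docker.io/kubernetesui/dashboard:v2.7.0
    # Pre-load the image from the host into the profile, then restart the Deployment.
    # Note: the pod pins the image by digest, so this only helps if the loaded image
    # resolves to the same digest and the pull policy permits using the cached copy.
    minikube -p embed-certs-163393 image load docker.io/kubernetesui/dashboard:v2.7.0
    kubectl --context embed-certs-163393 -n kubernetes-dashboard rollout restart deployment/kubernetes-dashboard
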
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-163393 -n embed-certs-163393
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-163393 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-163393 logs -n 25: (1.136256067s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────
─────┐
	│ COMMAND │                                                                                                                    ARGS                                                                                                                     │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────
─────┤
	│ start   │ -p no-preload-758002 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-758002            │ jenkins │ v1.37.0 │ 26 Oct 25 15:20 UTC │ 26 Oct 25 15:21 UTC │
	│ addons  │ enable dashboard -p embed-certs-163393 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                               │ embed-certs-163393           │ jenkins │ v1.37.0 │ 26 Oct 25 15:20 UTC │ 26 Oct 25 15:20 UTC │
	│ start   │ -p embed-certs-163393 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-163393           │ jenkins │ v1.37.0 │ 26 Oct 25 15:20 UTC │ 26 Oct 25 15:21 UTC │
	│ image   │ old-k8s-version-065983 image list --format=json                                                                                                                                                                                             │ old-k8s-version-065983       │ jenkins │ v1.37.0 │ 26 Oct 25 15:20 UTC │ 26 Oct 25 15:20 UTC │
	│ pause   │ -p old-k8s-version-065983 --alsologtostderr -v=1                                                                                                                                                                                            │ old-k8s-version-065983       │ jenkins │ v1.37.0 │ 26 Oct 25 15:20 UTC │ 26 Oct 25 15:20 UTC │
	│ unpause │ -p old-k8s-version-065983 --alsologtostderr -v=1                                                                                                                                                                                            │ old-k8s-version-065983       │ jenkins │ v1.37.0 │ 26 Oct 25 15:20 UTC │ 26 Oct 25 15:20 UTC │
	│ delete  │ -p old-k8s-version-065983                                                                                                                                                                                                                   │ old-k8s-version-065983       │ jenkins │ v1.37.0 │ 26 Oct 25 15:20 UTC │ 26 Oct 25 15:21 UTC │
	│ delete  │ -p old-k8s-version-065983                                                                                                                                                                                                                   │ old-k8s-version-065983       │ jenkins │ v1.37.0 │ 26 Oct 25 15:21 UTC │ 26 Oct 25 15:21 UTC │
	│ start   │ -p newest-cni-574718 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-574718            │ jenkins │ v1.37.0 │ 26 Oct 25 15:21 UTC │ 26 Oct 25 15:21 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-705037 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                     │ default-k8s-diff-port-705037 │ jenkins │ v1.37.0 │ 26 Oct 25 15:21 UTC │ 26 Oct 25 15:21 UTC │
	│ start   │ -p default-k8s-diff-port-705037 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-705037 │ jenkins │ v1.37.0 │ 26 Oct 25 15:21 UTC │ 26 Oct 25 15:22 UTC │
	│ image   │ no-preload-758002 image list --format=json                                                                                                                                                                                                  │ no-preload-758002            │ jenkins │ v1.37.0 │ 26 Oct 25 15:21 UTC │ 26 Oct 25 15:21 UTC │
	│ pause   │ -p no-preload-758002 --alsologtostderr -v=1                                                                                                                                                                                                 │ no-preload-758002            │ jenkins │ v1.37.0 │ 26 Oct 25 15:21 UTC │ 26 Oct 25 15:21 UTC │
	│ unpause │ -p no-preload-758002 --alsologtostderr -v=1                                                                                                                                                                                                 │ no-preload-758002            │ jenkins │ v1.37.0 │ 26 Oct 25 15:21 UTC │ 26 Oct 25 15:21 UTC │
	│ delete  │ -p no-preload-758002                                                                                                                                                                                                                        │ no-preload-758002            │ jenkins │ v1.37.0 │ 26 Oct 25 15:21 UTC │ 26 Oct 25 15:21 UTC │
	│ delete  │ -p no-preload-758002                                                                                                                                                                                                                        │ no-preload-758002            │ jenkins │ v1.37.0 │ 26 Oct 25 15:21 UTC │ 26 Oct 25 15:21 UTC │
	│ addons  │ enable metrics-server -p newest-cni-574718 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                     │ newest-cni-574718            │ jenkins │ v1.37.0 │ 26 Oct 25 15:21 UTC │ 26 Oct 25 15:21 UTC │
	│ stop    │ -p newest-cni-574718 --alsologtostderr -v=3                                                                                                                                                                                                 │ newest-cni-574718            │ jenkins │ v1.37.0 │ 26 Oct 25 15:21 UTC │ 26 Oct 25 15:22 UTC │
	│ addons  │ enable dashboard -p newest-cni-574718 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                │ newest-cni-574718            │ jenkins │ v1.37.0 │ 26 Oct 25 15:22 UTC │ 26 Oct 25 15:22 UTC │
	│ start   │ -p newest-cni-574718 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-574718            │ jenkins │ v1.37.0 │ 26 Oct 25 15:22 UTC │ 26 Oct 25 15:22 UTC │
	│ image   │ newest-cni-574718 image list --format=json                                                                                                                                                                                                  │ newest-cni-574718            │ jenkins │ v1.37.0 │ 26 Oct 25 15:22 UTC │ 26 Oct 25 15:22 UTC │
	│ pause   │ -p newest-cni-574718 --alsologtostderr -v=1                                                                                                                                                                                                 │ newest-cni-574718            │ jenkins │ v1.37.0 │ 26 Oct 25 15:22 UTC │ 26 Oct 25 15:22 UTC │
	│ unpause │ -p newest-cni-574718 --alsologtostderr -v=1                                                                                                                                                                                                 │ newest-cni-574718            │ jenkins │ v1.37.0 │ 26 Oct 25 15:22 UTC │ 26 Oct 25 15:22 UTC │
	│ delete  │ -p newest-cni-574718                                                                                                                                                                                                                        │ newest-cni-574718            │ jenkins │ v1.37.0 │ 26 Oct 25 15:22 UTC │ 26 Oct 25 15:22 UTC │
	│ delete  │ -p newest-cni-574718                                                                                                                                                                                                                        │ newest-cni-574718            │ jenkins │ v1.37.0 │ 26 Oct 25 15:22 UTC │ 26 Oct 25 15:22 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────
─────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 15:22:08
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 15:22:08.024156  182377 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:22:08.024392  182377 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:22:08.024406  182377 out.go:374] Setting ErrFile to fd 2...
	I1026 15:22:08.024410  182377 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:22:08.024606  182377 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-137233/.minikube/bin
	I1026 15:22:08.025048  182377 out.go:368] Setting JSON to false
	I1026 15:22:08.025981  182377 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7462,"bootTime":1761484666,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 15:22:08.026077  182377 start.go:141] virtualization: kvm guest
	I1026 15:22:08.027688  182377 out.go:179] * [newest-cni-574718] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 15:22:08.028960  182377 notify.go:220] Checking for updates...
	I1026 15:22:08.028993  182377 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 15:22:08.030046  182377 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 15:22:08.031185  182377 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-137233/kubeconfig
	I1026 15:22:08.032356  182377 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-137233/.minikube
	I1026 15:22:08.033461  182377 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 15:22:08.034474  182377 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 15:22:08.035832  182377 config.go:182] Loaded profile config "newest-cni-574718": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:22:08.036313  182377 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 15:22:08.072389  182377 out.go:179] * Using the kvm2 driver based on existing profile
	I1026 15:22:08.073663  182377 start.go:305] selected driver: kvm2
	I1026 15:22:08.073682  182377 start.go:925] validating driver "kvm2" against &{Name:newest-cni-574718 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:newest-cni-574718 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.33 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s S
cheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:22:08.073825  182377 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 15:22:08.075175  182377 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1026 15:22:08.075218  182377 cni.go:84] Creating CNI manager for ""
	I1026 15:22:08.075284  182377 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 15:22:08.075345  182377 start.go:349] cluster config:
	{Name:newest-cni-574718 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-574718 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.33 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRe
quested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:22:08.075449  182377 iso.go:125] acquiring lock: {Name:mkfe78fcc13f0f0cc3fec30206c34a5da423b32d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:22:08.077008  182377 out.go:179] * Starting "newest-cni-574718" primary control-plane node in "newest-cni-574718" cluster
	I1026 15:22:08.078030  182377 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:22:08.078073  182377 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-137233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1026 15:22:08.078088  182377 cache.go:58] Caching tarball of preloaded images
	I1026 15:22:08.078221  182377 preload.go:233] Found /home/jenkins/minikube-integration/21664-137233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 15:22:08.078236  182377 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 15:22:08.078334  182377 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/newest-cni-574718/config.json ...
	I1026 15:22:08.078601  182377 start.go:360] acquireMachinesLock for newest-cni-574718: {Name:mka0e861669c2f6d38861d0614c7d3b8dd89392c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1026 15:22:08.078675  182377 start.go:364] duration metric: took 45.376µs to acquireMachinesLock for "newest-cni-574718"
	I1026 15:22:08.078701  182377 start.go:96] Skipping create...Using existing machine configuration
	I1026 15:22:08.078711  182377 fix.go:54] fixHost starting: 
	I1026 15:22:08.080626  182377 fix.go:112] recreateIfNeeded on newest-cni-574718: state=Stopped err=<nil>
	W1026 15:22:08.080669  182377 fix.go:138] unexpected machine state, will restart: <nil>
	I1026 15:22:06.333558  181858 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:22:06.357436  181858 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-705037" to be "Ready" ...
	I1026 15:22:06.360857  181858 node_ready.go:49] node "default-k8s-diff-port-705037" is "Ready"
	I1026 15:22:06.360901  181858 node_ready.go:38] duration metric: took 3.362736ms for node "default-k8s-diff-port-705037" to be "Ready" ...
	I1026 15:22:06.360919  181858 api_server.go:52] waiting for apiserver process to appear ...
	I1026 15:22:06.360981  181858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:22:06.385860  181858 api_server.go:72] duration metric: took 266.62216ms to wait for apiserver process to appear ...
	I1026 15:22:06.385897  181858 api_server.go:88] waiting for apiserver healthz status ...
	I1026 15:22:06.385937  181858 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8444/healthz ...
	I1026 15:22:06.392647  181858 api_server.go:279] https://192.168.72.253:8444/healthz returned 200:
	ok
	I1026 15:22:06.393766  181858 api_server.go:141] control plane version: v1.34.1
	I1026 15:22:06.393803  181858 api_server.go:131] duration metric: took 7.895398ms to wait for apiserver health ...
	I1026 15:22:06.393816  181858 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 15:22:06.397637  181858 system_pods.go:59] 8 kube-system pods found
	I1026 15:22:06.397674  181858 system_pods.go:61] "coredns-66bc5c9577-fs558" [35c18482-b39d-4e3f-aafd-51642938f5b0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:22:06.397686  181858 system_pods.go:61] "etcd-default-k8s-diff-port-705037" [8f9b42db-0213-4e05-b438-59d38eab399b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:22:06.397698  181858 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-705037" [b8aa7de2-f2f9-447e-83a4-ce4eed131bf3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:22:06.397709  181858 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-705037" [48a3f44e-dfb0-46cb-969f-cf88e075e662] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:22:06.397718  181858 system_pods.go:61] "kube-proxy-kr5kl" [7598b50f-deee-406f-86fc-1f57c2de4887] Running
	I1026 15:22:06.397728  181858 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-705037" [130cd574-dab4-4029-9fa0-47959d8b0eac] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:22:06.397746  181858 system_pods.go:61] "metrics-server-746fcd58dc-nsvb5" [28c11adc-3f4d-46bc-abc5-f9b466e2ca10] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 15:22:06.397756  181858 system_pods.go:61] "storage-provisioner" [974398e3-6fd7-44da-9ec6-a726c71c9e43] Running
	I1026 15:22:06.397766  181858 system_pods.go:74] duration metric: took 3.941599ms to wait for pod list to return data ...
	I1026 15:22:06.397779  181858 default_sa.go:34] waiting for default service account to be created ...
	I1026 15:22:06.403865  181858 default_sa.go:45] found service account: "default"
	I1026 15:22:06.403888  181858 default_sa.go:55] duration metric: took 6.102699ms for default service account to be created ...
	I1026 15:22:06.403898  181858 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 15:22:06.408267  181858 system_pods.go:86] 8 kube-system pods found
	I1026 15:22:06.408305  181858 system_pods.go:89] "coredns-66bc5c9577-fs558" [35c18482-b39d-4e3f-aafd-51642938f5b0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:22:06.408318  181858 system_pods.go:89] "etcd-default-k8s-diff-port-705037" [8f9b42db-0213-4e05-b438-59d38eab399b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:22:06.408330  181858 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-705037" [b8aa7de2-f2f9-447e-83a4-ce4eed131bf3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:22:06.408339  181858 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-705037" [48a3f44e-dfb0-46cb-969f-cf88e075e662] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:22:06.408345  181858 system_pods.go:89] "kube-proxy-kr5kl" [7598b50f-deee-406f-86fc-1f57c2de4887] Running
	I1026 15:22:06.408354  181858 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-705037" [130cd574-dab4-4029-9fa0-47959d8b0eac] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:22:06.408361  181858 system_pods.go:89] "metrics-server-746fcd58dc-nsvb5" [28c11adc-3f4d-46bc-abc5-f9b466e2ca10] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 15:22:06.408373  181858 system_pods.go:89] "storage-provisioner" [974398e3-6fd7-44da-9ec6-a726c71c9e43] Running
	I1026 15:22:06.408383  181858 system_pods.go:126] duration metric: took 4.477868ms to wait for k8s-apps to be running ...
	I1026 15:22:06.408393  181858 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 15:22:06.408450  181858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:22:06.432635  181858 system_svc.go:56] duration metric: took 24.227246ms WaitForService to wait for kubelet
	I1026 15:22:06.432676  181858 kubeadm.go:586] duration metric: took 313.448447ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:22:06.432702  181858 node_conditions.go:102] verifying NodePressure condition ...
	I1026 15:22:06.435956  181858 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 15:22:06.435988  181858 node_conditions.go:123] node cpu capacity is 2
	I1026 15:22:06.436002  181858 node_conditions.go:105] duration metric: took 3.294076ms to run NodePressure ...
	I1026 15:22:06.436018  181858 start.go:241] waiting for startup goroutines ...
	I1026 15:22:06.515065  181858 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 15:22:06.572989  181858 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:22:06.584697  181858 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1026 15:22:06.584737  181858 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1026 15:22:06.595077  181858 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1026 15:22:06.595106  181858 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1026 15:22:06.638704  181858 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1026 15:22:06.638736  181858 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1026 15:22:06.659544  181858 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1026 15:22:06.659582  181858 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1026 15:22:06.702281  181858 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 15:22:06.702320  181858 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1026 15:22:06.711972  181858 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1026 15:22:06.712006  181858 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1026 15:22:06.757866  181858 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 15:22:06.788030  181858 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1026 15:22:06.788064  181858 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1026 15:22:06.847661  181858 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1026 15:22:06.847708  181858 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1026 15:22:06.929153  181858 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1026 15:22:06.929177  181858 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1026 15:22:06.986412  181858 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1026 15:22:06.986448  181858 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1026 15:22:07.045193  181858 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1026 15:22:07.045218  181858 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1026 15:22:07.093617  181858 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 15:22:07.093654  181858 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1026 15:22:07.162711  181858 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 15:22:08.298101  181858 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.725070201s)
	I1026 15:22:08.369209  181858 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.61128174s)
	I1026 15:22:08.369257  181858 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-705037"
	I1026 15:22:08.605124  181858 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.442357492s)
	I1026 15:22:08.606598  181858 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-705037 addons enable metrics-server
	
	I1026 15:22:08.607892  181858 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I1026 15:22:08.609005  181858 addons.go:514] duration metric: took 2.489743866s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I1026 15:22:08.609043  181858 start.go:246] waiting for cluster config update ...
	I1026 15:22:08.609058  181858 start.go:255] writing updated cluster config ...
	I1026 15:22:08.609345  181858 ssh_runner.go:195] Run: rm -f paused
	I1026 15:22:08.616260  181858 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:22:08.620760  181858 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-fs558" in "kube-system" namespace to be "Ready" or be gone ...
	W1026 15:22:10.628668  181858 pod_ready.go:104] pod "coredns-66bc5c9577-fs558" is not "Ready", error: <nil>
	I1026 15:22:08.082049  182377 out.go:252] * Restarting existing kvm2 VM for "newest-cni-574718" ...
	I1026 15:22:08.082089  182377 main.go:141] libmachine: starting domain...
	I1026 15:22:08.082102  182377 main.go:141] libmachine: ensuring networks are active...
	I1026 15:22:08.083029  182377 main.go:141] libmachine: Ensuring network default is active
	I1026 15:22:08.083543  182377 main.go:141] libmachine: Ensuring network mk-newest-cni-574718 is active
	I1026 15:22:08.084108  182377 main.go:141] libmachine: getting domain XML...
	I1026 15:22:08.085257  182377 main.go:141] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>newest-cni-574718</name>
	  <uuid>3e8359f9-dc38-4472-b6d3-ffe603a5ee64</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21664-137233/.minikube/machines/newest-cni-574718/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21664-137233/.minikube/machines/newest-cni-574718/newest-cni-574718.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:7b:b5:97'/>
	      <source network='mk-newest-cni-574718'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:a1:2e:d8'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1026 15:22:09.396910  182377 main.go:141] libmachine: waiting for domain to start...
	I1026 15:22:09.398416  182377 main.go:141] libmachine: domain is now running
	I1026 15:22:09.398445  182377 main.go:141] libmachine: waiting for IP...
	I1026 15:22:09.399448  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:09.400230  182377 main.go:141] libmachine: domain newest-cni-574718 has current primary IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:09.400244  182377 main.go:141] libmachine: found domain IP: 192.168.61.33
	I1026 15:22:09.400250  182377 main.go:141] libmachine: reserving static IP address...
	I1026 15:22:09.400772  182377 main.go:141] libmachine: found host DHCP lease matching {name: "newest-cni-574718", mac: "52:54:00:7b:b5:97", ip: "192.168.61.33"} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:21:24 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:09.400809  182377 main.go:141] libmachine: skip adding static IP to network mk-newest-cni-574718 - found existing host DHCP lease matching {name: "newest-cni-574718", mac: "52:54:00:7b:b5:97", ip: "192.168.61.33"}
	I1026 15:22:09.400837  182377 main.go:141] libmachine: reserved static IP address 192.168.61.33 for domain newest-cni-574718
	I1026 15:22:09.400849  182377 main.go:141] libmachine: waiting for SSH...
	I1026 15:22:09.400857  182377 main.go:141] libmachine: Getting to WaitForSSH function...
	I1026 15:22:09.403391  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:09.403822  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:21:24 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:09.403850  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:09.404075  182377 main.go:141] libmachine: Using SSH client type: native
	I1026 15:22:09.404289  182377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 192.168.61.33 22 <nil> <nil>}
	I1026 15:22:09.404299  182377 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1026 15:22:12.493681  182377 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.33:22: connect: no route to host
	W1026 15:22:12.635327  181858 pod_ready.go:104] pod "coredns-66bc5c9577-fs558" is not "Ready", error: <nil>
	I1026 15:22:14.627621  181858 pod_ready.go:94] pod "coredns-66bc5c9577-fs558" is "Ready"
	I1026 15:22:14.627655  181858 pod_ready.go:86] duration metric: took 6.00687198s for pod "coredns-66bc5c9577-fs558" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:22:14.630599  181858 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-705037" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:22:14.634975  181858 pod_ready.go:94] pod "etcd-default-k8s-diff-port-705037" is "Ready"
	I1026 15:22:14.635007  181858 pod_ready.go:86] duration metric: took 4.382539ms for pod "etcd-default-k8s-diff-port-705037" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:22:14.637185  181858 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-705037" in "kube-system" namespace to be "Ready" or be gone ...
	W1026 15:22:16.644581  181858 pod_ready.go:104] pod "kube-apiserver-default-k8s-diff-port-705037" is not "Ready", error: <nil>
	W1026 15:22:19.144809  181858 pod_ready.go:104] pod "kube-apiserver-default-k8s-diff-port-705037" is not "Ready", error: <nil>
	I1026 15:22:20.143611  181858 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-705037" is "Ready"
	I1026 15:22:20.143640  181858 pod_ready.go:86] duration metric: took 5.506432171s for pod "kube-apiserver-default-k8s-diff-port-705037" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:22:20.145536  181858 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-705037" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:22:20.149100  181858 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-705037" is "Ready"
	I1026 15:22:20.149131  181858 pod_ready.go:86] duration metric: took 3.572718ms for pod "kube-controller-manager-default-k8s-diff-port-705037" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:22:20.151047  181858 pod_ready.go:83] waiting for pod "kube-proxy-kr5kl" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:22:20.155496  181858 pod_ready.go:94] pod "kube-proxy-kr5kl" is "Ready"
	I1026 15:22:20.155521  181858 pod_ready.go:86] duration metric: took 4.452008ms for pod "kube-proxy-kr5kl" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:22:20.157137  181858 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-705037" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:22:20.424601  181858 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-705037" is "Ready"
	I1026 15:22:20.424645  181858 pod_ready.go:86] duration metric: took 267.484691ms for pod "kube-scheduler-default-k8s-diff-port-705037" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:22:20.424664  181858 pod_ready.go:40] duration metric: took 11.808360636s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:22:20.472398  181858 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1026 15:22:20.474272  181858 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-705037" cluster and "default" namespace by default
	I1026 15:22:18.573877  182377 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.33:22: connect: no route to host
	I1026 15:22:21.678716  182377 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 15:22:21.682223  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:21.682617  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:21.682640  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:21.682859  182377 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/newest-cni-574718/config.json ...
	I1026 15:22:21.683068  182377 machine.go:93] provisionDockerMachine start ...
	I1026 15:22:21.685439  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:21.685814  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:21.685841  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:21.686028  182377 main.go:141] libmachine: Using SSH client type: native
	I1026 15:22:21.686280  182377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 192.168.61.33 22 <nil> <nil>}
	I1026 15:22:21.686297  182377 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 15:22:21.789433  182377 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1026 15:22:21.789491  182377 buildroot.go:166] provisioning hostname "newest-cni-574718"
	I1026 15:22:21.792404  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:21.792911  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:21.792937  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:21.793176  182377 main.go:141] libmachine: Using SSH client type: native
	I1026 15:22:21.793395  182377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 192.168.61.33 22 <nil> <nil>}
	I1026 15:22:21.793410  182377 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-574718 && echo "newest-cni-574718" | sudo tee /etc/hostname
	I1026 15:22:21.914128  182377 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-574718
	
	I1026 15:22:21.917275  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:21.917738  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:21.917764  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:21.917937  182377 main.go:141] libmachine: Using SSH client type: native
	I1026 15:22:21.918176  182377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 192.168.61.33 22 <nil> <nil>}
	I1026 15:22:21.918200  182377 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-574718' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-574718/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-574718' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 15:22:22.026151  182377 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 15:22:22.026183  182377 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21664-137233/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-137233/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-137233/.minikube}
	I1026 15:22:22.026217  182377 buildroot.go:174] setting up certificates
	I1026 15:22:22.026229  182377 provision.go:84] configureAuth start
	I1026 15:22:22.029052  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:22.029554  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:22.029582  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:22.031873  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:22.032223  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:22.032249  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:22.032371  182377 provision.go:143] copyHostCerts
	I1026 15:22:22.032450  182377 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-137233/.minikube/ca.pem, removing ...
	I1026 15:22:22.032491  182377 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-137233/.minikube/ca.pem
	I1026 15:22:22.032577  182377 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-137233/.minikube/ca.pem (1082 bytes)
	I1026 15:22:22.032704  182377 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-137233/.minikube/cert.pem, removing ...
	I1026 15:22:22.032719  182377 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-137233/.minikube/cert.pem
	I1026 15:22:22.032762  182377 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-137233/.minikube/cert.pem (1123 bytes)
	I1026 15:22:22.032845  182377 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-137233/.minikube/key.pem, removing ...
	I1026 15:22:22.032855  182377 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-137233/.minikube/key.pem
	I1026 15:22:22.032893  182377 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-137233/.minikube/key.pem (1675 bytes)
	I1026 15:22:22.032958  182377 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-137233/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca-key.pem org=jenkins.newest-cni-574718 san=[127.0.0.1 192.168.61.33 localhost minikube newest-cni-574718]
	I1026 15:22:22.469944  182377 provision.go:177] copyRemoteCerts
	I1026 15:22:22.470018  182377 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 15:22:22.472561  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:22.472948  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:22.472970  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:22.473117  182377 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/newest-cni-574718/id_rsa Username:docker}
	I1026 15:22:22.554777  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 15:22:22.582124  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1026 15:22:22.610149  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 15:22:22.638169  182377 provision.go:87] duration metric: took 611.92185ms to configureAuth
	I1026 15:22:22.638199  182377 buildroot.go:189] setting minikube options for container-runtime
	I1026 15:22:22.638398  182377 config.go:182] Loaded profile config "newest-cni-574718": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:22:22.641177  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:22.641627  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:22.641657  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:22.641842  182377 main.go:141] libmachine: Using SSH client type: native
	I1026 15:22:22.642047  182377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 192.168.61.33 22 <nil> <nil>}
	I1026 15:22:22.642063  182377 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 15:22:22.906384  182377 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 15:22:22.906420  182377 machine.go:96] duration metric: took 1.223336761s to provisionDockerMachine
	I1026 15:22:22.906434  182377 start.go:293] postStartSetup for "newest-cni-574718" (driver="kvm2")
	I1026 15:22:22.906449  182377 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 15:22:22.906556  182377 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 15:22:22.909934  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:22.910412  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:22.910439  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:22.910638  182377 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/newest-cni-574718/id_rsa Username:docker}
	I1026 15:22:22.992977  182377 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 15:22:22.997825  182377 info.go:137] Remote host: Buildroot 2025.02
	I1026 15:22:22.997860  182377 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-137233/.minikube/addons for local assets ...
	I1026 15:22:22.997933  182377 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-137233/.minikube/files for local assets ...
	I1026 15:22:22.998039  182377 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-137233/.minikube/files/etc/ssl/certs/1412332.pem -> 1412332.pem in /etc/ssl/certs
	I1026 15:22:22.998136  182377 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 15:22:23.009341  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/files/etc/ssl/certs/1412332.pem --> /etc/ssl/certs/1412332.pem (1708 bytes)
	I1026 15:22:23.040890  182377 start.go:296] duration metric: took 134.438124ms for postStartSetup
	I1026 15:22:23.040950  182377 fix.go:56] duration metric: took 14.962237903s for fixHost
	I1026 15:22:23.044164  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:23.044594  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:23.044630  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:23.044933  182377 main.go:141] libmachine: Using SSH client type: native
	I1026 15:22:23.045233  182377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 192.168.61.33 22 <nil> <nil>}
	I1026 15:22:23.045254  182377 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1026 15:22:23.147520  182377 main.go:141] libmachine: SSH cmd err, output: <nil>: 1761492143.098139468
	
	I1026 15:22:23.147547  182377 fix.go:216] guest clock: 1761492143.098139468
	I1026 15:22:23.147556  182377 fix.go:229] Guest: 2025-10-26 15:22:23.098139468 +0000 UTC Remote: 2025-10-26 15:22:23.04095679 +0000 UTC m=+15.073904102 (delta=57.182678ms)
	I1026 15:22:23.147581  182377 fix.go:200] guest clock delta is within tolerance: 57.182678ms
	I1026 15:22:23.147589  182377 start.go:83] releasing machines lock for "newest-cni-574718", held for 15.068897915s
	I1026 15:22:23.150728  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:23.151142  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:23.151167  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:23.151719  182377 ssh_runner.go:195] Run: cat /version.json
	I1026 15:22:23.151804  182377 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 15:22:23.155059  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:23.155294  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:23.155561  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:23.155595  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:23.155739  182377 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/newest-cni-574718/id_rsa Username:docker}
	I1026 15:22:23.155910  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:23.155945  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:23.156130  182377 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/newest-cni-574718/id_rsa Username:docker}
	I1026 15:22:23.231442  182377 ssh_runner.go:195] Run: systemctl --version
	I1026 15:22:23.263168  182377 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 15:22:23.405941  182377 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 15:22:23.412607  182377 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 15:22:23.412693  182377 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 15:22:23.431222  182377 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 15:22:23.431247  182377 start.go:495] detecting cgroup driver to use...
	I1026 15:22:23.431329  182377 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 15:22:23.449871  182377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 15:22:23.466135  182377 docker.go:218] disabling cri-docker service (if available) ...
	I1026 15:22:23.466207  182377 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 15:22:23.483845  182377 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 15:22:23.499194  182377 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 15:22:23.646146  182377 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 15:22:23.864499  182377 docker.go:234] disabling docker service ...
	I1026 15:22:23.864576  182377 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 15:22:23.882304  182377 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 15:22:23.897571  182377 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 15:22:24.064966  182377 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 15:22:24.201804  182377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 15:22:24.216914  182377 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 15:22:24.239366  182377 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 15:22:24.239426  182377 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:22:24.251236  182377 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 15:22:24.251318  182377 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:22:24.263630  182377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:22:24.275134  182377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:22:24.287125  182377 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 15:22:24.302136  182377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:22:24.315011  182377 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:22:24.335688  182377 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:22:24.347573  182377 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 15:22:24.358181  182377 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1026 15:22:24.358260  182377 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1026 15:22:24.379177  182377 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 15:22:24.391253  182377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:22:24.532080  182377 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 15:22:24.652383  182377 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 15:22:24.652516  182377 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 15:22:24.658249  182377 start.go:563] Will wait 60s for crictl version
	I1026 15:22:24.658308  182377 ssh_runner.go:195] Run: which crictl
	I1026 15:22:24.662623  182377 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1026 15:22:24.701747  182377 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1026 15:22:24.701833  182377 ssh_runner.go:195] Run: crio --version
	I1026 15:22:24.730381  182377 ssh_runner.go:195] Run: crio --version
	I1026 15:22:24.761145  182377 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1026 15:22:24.764994  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:24.765410  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:24.765433  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:24.765621  182377 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1026 15:22:24.770397  182377 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:22:24.787194  182377 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1026 15:22:24.788437  182377 kubeadm.go:883] updating cluster {Name:newest-cni-574718 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-574718 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.33 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 15:22:24.788570  182377 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:22:24.788622  182377 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:22:24.828217  182377 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1026 15:22:24.828316  182377 ssh_runner.go:195] Run: which lz4
	I1026 15:22:24.833073  182377 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1026 15:22:24.838213  182377 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1026 15:22:24.838246  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1026 15:22:26.232172  182377 crio.go:462] duration metric: took 1.399140151s to copy over tarball
	I1026 15:22:26.232290  182377 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1026 15:22:28.031969  182377 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.79963377s)
	I1026 15:22:28.032009  182377 crio.go:469] duration metric: took 1.799794706s to extract the tarball
	I1026 15:22:28.032019  182377 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1026 15:22:28.083266  182377 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:22:28.129231  182377 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:22:28.129262  182377 cache_images.go:85] Images are preloaded, skipping loading
	I1026 15:22:28.129271  182377 kubeadm.go:934] updating node { 192.168.61.33 8443 v1.34.1 crio true true} ...
	I1026 15:22:28.129386  182377 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-574718 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.33
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-574718 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 15:22:28.129473  182377 ssh_runner.go:195] Run: crio config
	I1026 15:22:28.175414  182377 cni.go:84] Creating CNI manager for ""
	I1026 15:22:28.175448  182377 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 15:22:28.175493  182377 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1026 15:22:28.175532  182377 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.33 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-574718 NodeName:newest-cni-574718 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.33"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.33 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 15:22:28.175679  182377 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.33
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-574718"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.33"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.33"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 15:22:28.175746  182377 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 15:22:28.189114  182377 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 15:22:28.189184  182377 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 15:22:28.201285  182377 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1026 15:22:28.222167  182377 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 15:22:28.241882  182377 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1026 15:22:28.262267  182377 ssh_runner.go:195] Run: grep 192.168.61.33	control-plane.minikube.internal$ /etc/hosts
	I1026 15:22:28.266495  182377 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.33	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:22:28.281183  182377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:22:28.445545  182377 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:22:28.481631  182377 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/newest-cni-574718 for IP: 192.168.61.33
	I1026 15:22:28.481655  182377 certs.go:195] generating shared ca certs ...
	I1026 15:22:28.481672  182377 certs.go:227] acquiring lock for ca certs: {Name:mk93131c71acd79b9ab313e88723331b0af2d4bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:22:28.481853  182377 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-137233/.minikube/ca.key
	I1026 15:22:28.481904  182377 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-137233/.minikube/proxy-client-ca.key
	I1026 15:22:28.481916  182377 certs.go:257] generating profile certs ...
	I1026 15:22:28.482010  182377 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/newest-cni-574718/client.key
	I1026 15:22:28.482074  182377 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/newest-cni-574718/apiserver.key.59f77b64
	I1026 15:22:28.482115  182377 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/newest-cni-574718/proxy-client.key
	I1026 15:22:28.482217  182377 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/141233.pem (1338 bytes)
	W1026 15:22:28.482254  182377 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-137233/.minikube/certs/141233_empty.pem, impossibly tiny 0 bytes
	I1026 15:22:28.482262  182377 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca-key.pem (1679 bytes)
	I1026 15:22:28.482285  182377 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca.pem (1082 bytes)
	I1026 15:22:28.482316  182377 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/cert.pem (1123 bytes)
	I1026 15:22:28.482340  182377 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/key.pem (1675 bytes)
	I1026 15:22:28.482379  182377 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-137233/.minikube/files/etc/ssl/certs/1412332.pem (1708 bytes)
	I1026 15:22:28.483044  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 15:22:28.517526  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1026 15:22:28.558414  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 15:22:28.586297  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 15:22:28.613805  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/newest-cni-574718/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1026 15:22:28.642929  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/newest-cni-574718/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 15:22:28.671810  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/newest-cni-574718/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 15:22:28.700191  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/newest-cni-574718/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 15:22:28.729422  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 15:22:28.756494  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/certs/141233.pem --> /usr/share/ca-certificates/141233.pem (1338 bytes)
	I1026 15:22:28.783988  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/files/etc/ssl/certs/1412332.pem --> /usr/share/ca-certificates/1412332.pem (1708 bytes)
	I1026 15:22:28.812588  182377 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 15:22:28.832551  182377 ssh_runner.go:195] Run: openssl version
	I1026 15:22:28.838355  182377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 15:22:28.850638  182377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:22:28.855574  182377 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 14:16 /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:22:28.855636  182377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:22:28.862555  182377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 15:22:28.874412  182377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/141233.pem && ln -fs /usr/share/ca-certificates/141233.pem /etc/ssl/certs/141233.pem"
	I1026 15:22:28.886395  182377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/141233.pem
	I1026 15:22:28.891025  182377 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 14:24 /usr/share/ca-certificates/141233.pem
	I1026 15:22:28.891082  182377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/141233.pem
	I1026 15:22:28.897923  182377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/141233.pem /etc/ssl/certs/51391683.0"
	I1026 15:22:28.910115  182377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1412332.pem && ln -fs /usr/share/ca-certificates/1412332.pem /etc/ssl/certs/1412332.pem"
	I1026 15:22:28.922622  182377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1412332.pem
	I1026 15:22:28.927296  182377 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 14:24 /usr/share/ca-certificates/1412332.pem
	I1026 15:22:28.927337  182377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1412332.pem
	I1026 15:22:28.934138  182377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1412332.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 15:22:28.945693  182377 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 15:22:28.950557  182377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 15:22:28.957416  182377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 15:22:28.964523  182377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 15:22:28.971586  182377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 15:22:28.978762  182377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 15:22:28.986053  182377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1026 15:22:28.993134  182377 kubeadm.go:400] StartCluster: {Name:newest-cni-574718 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-574718 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.33 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:22:28.993263  182377 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 15:22:28.993323  182377 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 15:22:29.032028  182377 cri.go:89] found id: ""
	I1026 15:22:29.032103  182377 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 15:22:29.043952  182377 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1026 15:22:29.043972  182377 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1026 15:22:29.044040  182377 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1026 15:22:29.056289  182377 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1026 15:22:29.057119  182377 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-574718" does not appear in /home/jenkins/minikube-integration/21664-137233/kubeconfig
	I1026 15:22:29.057648  182377 kubeconfig.go:62] /home/jenkins/minikube-integration/21664-137233/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-574718" cluster setting kubeconfig missing "newest-cni-574718" context setting]
	I1026 15:22:29.058341  182377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-137233/kubeconfig: {Name:mka07626640e842c6c2177ad5f101c4a2dd91d4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:22:29.060135  182377 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1026 15:22:29.070432  182377 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.61.33
	I1026 15:22:29.070477  182377 kubeadm.go:1160] stopping kube-system containers ...
	I1026 15:22:29.070498  182377 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1026 15:22:29.070565  182377 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 15:22:29.108499  182377 cri.go:89] found id: ""
	I1026 15:22:29.108625  182377 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1026 15:22:29.128646  182377 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 15:22:29.140200  182377 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 15:22:29.140217  182377 kubeadm.go:157] found existing configuration files:
	
	I1026 15:22:29.140259  182377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 15:22:29.150547  182377 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 15:22:29.150618  182377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 15:22:29.161551  182377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 15:22:29.171576  182377 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 15:22:29.171637  182377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 15:22:29.182113  182377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 15:22:29.191928  182377 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 15:22:29.191975  182377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 15:22:29.202335  182377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 15:22:29.212043  182377 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 15:22:29.212089  182377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 15:22:29.222315  182377 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 15:22:29.232961  182377 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1026 15:22:29.285078  182377 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1026 15:22:30.940058  182377 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.654938215s)
	I1026 15:22:30.940132  182377 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1026 15:22:31.190262  182377 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1026 15:22:31.246873  182377 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1026 15:22:31.330409  182377 api_server.go:52] waiting for apiserver process to appear ...
	I1026 15:22:31.330532  182377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:22:31.830602  182377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:22:32.330655  182377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:22:32.830666  182377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:22:33.330601  182377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:22:33.376334  182377 api_server.go:72] duration metric: took 2.045939712s to wait for apiserver process to appear ...
	I1026 15:22:33.376368  182377 api_server.go:88] waiting for apiserver healthz status ...
	I1026 15:22:33.376393  182377 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1026 15:22:33.377001  182377 api_server.go:269] stopped: https://192.168.61.33:8443/healthz: Get "https://192.168.61.33:8443/healthz": dial tcp 192.168.61.33:8443: connect: connection refused
	I1026 15:22:33.876665  182377 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1026 15:22:36.154624  182377 api_server.go:279] https://192.168.61.33:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1026 15:22:36.154676  182377 api_server.go:103] status: https://192.168.61.33:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1026 15:22:36.154695  182377 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1026 15:22:36.184996  182377 api_server.go:279] https://192.168.61.33:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1026 15:22:36.185030  182377 api_server.go:103] status: https://192.168.61.33:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1026 15:22:36.377426  182377 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1026 15:22:36.382349  182377 api_server.go:279] https://192.168.61.33:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 15:22:36.382371  182377 api_server.go:103] status: https://192.168.61.33:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 15:22:36.876548  182377 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1026 15:22:36.881970  182377 api_server.go:279] https://192.168.61.33:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 15:22:36.882006  182377 api_server.go:103] status: https://192.168.61.33:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 15:22:37.376698  182377 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1026 15:22:37.384123  182377 api_server.go:279] https://192.168.61.33:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 15:22:37.384156  182377 api_server.go:103] status: https://192.168.61.33:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 15:22:37.876774  182377 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1026 15:22:37.882031  182377 api_server.go:279] https://192.168.61.33:8443/healthz returned 200:
	ok
	I1026 15:22:37.891824  182377 api_server.go:141] control plane version: v1.34.1
	I1026 15:22:37.891850  182377 api_server.go:131] duration metric: took 4.515475379s to wait for apiserver health ...
	I1026 15:22:37.891861  182377 cni.go:84] Creating CNI manager for ""
	I1026 15:22:37.891868  182377 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 15:22:37.893513  182377 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1026 15:22:37.894739  182377 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1026 15:22:37.909012  182377 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
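The 496-byte 1-k8s.conflist pushed here is the CNI config list for the default bridge network; its contents are not reproduced in the log. A representative bridge + portmap conflist, written to the same path as the scp step above, might look like the sketch below. The field values (network name, subnet) are assumptions for illustration, not the literal file minikube generates:

	// Writes a representative bridge CNI config list into /etc/cni/net.d.
	package main

	import (
		"log"
		"os"
		"path/filepath"
	)

	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	`

	func main() {
		dir := "/etc/cni/net.d"
		if err := os.MkdirAll(dir, 0o755); err != nil { // mirrors `sudo mkdir -p /etc/cni/net.d`
			log.Fatal(err)
		}
		if err := os.WriteFile(filepath.Join(dir, "1-k8s.conflist"), []byte(conflist), 0o644); err != nil {
			log.Fatal(err)
		}
	}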
	I1026 15:22:37.935970  182377 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 15:22:37.941779  182377 system_pods.go:59] 8 kube-system pods found
	I1026 15:22:37.941822  182377 system_pods.go:61] "coredns-66bc5c9577-fbtqn" [317aed6d-9584-40f3-9d5c-9e3c670811e8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:22:37.941834  182377 system_pods.go:61] "etcd-newest-cni-574718" [527dfb34-9071-44bf-be3c-75921ad0c849] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:22:37.941848  182377 system_pods.go:61] "kube-apiserver-newest-cni-574718" [4285cb5e-4a30-4d87-8996-1f5fbe723525] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:22:37.941862  182377 system_pods.go:61] "kube-controller-manager-newest-cni-574718" [42199d84-c838-436b-ada5-de73d6269345] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:22:37.941873  182377 system_pods.go:61] "kube-proxy-f9l99" [5e0c5bab-fea7-41d6-bffe-b659055cf68c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1026 15:22:37.941878  182377 system_pods.go:61] "kube-scheduler-newest-cni-574718" [0250002e-226b-45d2-a685-6e315db3d009] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:22:37.941884  182377 system_pods.go:61] "metrics-server-746fcd58dc-7vxxx" [15ffbc76-a090-4786-9808-18f8b4e5ebb8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 15:22:37.941889  182377 system_pods.go:61] "storage-provisioner" [4ec0a217-f2c8-4395-babe-ee26b81a7e69] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:22:37.941897  182377 system_pods.go:74] duration metric: took 5.899576ms to wait for pod list to return data ...
	I1026 15:22:37.941906  182377 node_conditions.go:102] verifying NodePressure condition ...
	I1026 15:22:37.946827  182377 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 15:22:37.946868  182377 node_conditions.go:123] node cpu capacity is 2
	I1026 15:22:37.946885  182377 node_conditions.go:105] duration metric: took 4.973356ms to run NodePressure ...
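The two checks above, listing kube-system pods and reading the node's cpu and ephemeral-storage capacity for the NodePressure verification, can be reproduced against the same kubeconfig with client-go. A minimal sketch, assuming the kubeconfig path seen later in this log; the output format is illustrative, not minikube's system_pods.go / node_conditions.go code:

	// Lists kube-system pods and prints node capacity via client-go.
	package main

	import (
		"context"
		"fmt"
		"log"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path taken from the log; adjust for your environment.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21664-137233/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		ctx := context.Background()

		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%d kube-system pods found\n", len(pods.Items))
		for _, p := range pods.Items {
			fmt.Printf("  %q %s\n", p.Name, p.Status.Phase)
		}

		nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("node %s: cpu capacity %s, ephemeral storage %s\n", n.Name, cpu.String(), storage.String())
		}
	}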
	I1026 15:22:37.946955  182377 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1026 15:22:38.207008  182377 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 15:22:38.236075  182377 ops.go:34] apiserver oom_adj: -16
	I1026 15:22:38.236107  182377 kubeadm.go:601] duration metric: took 9.192128682s to restartPrimaryControlPlane
	I1026 15:22:38.236126  182377 kubeadm.go:402] duration metric: took 9.243002383s to StartCluster
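The oom_adj probe a few lines above (`cat /proc/$(pgrep kube-apiserver)/oom_adj` returning -16) confirms the apiserver process is shielded from the OOM killer. A minimal sketch of the same check, locating the pid by /proc/<pid>/comm instead of pgrep; this is an illustration under those assumptions, not minikube's ops.go code:

	// Finds the kube-apiserver pid via /proc/<pid>/comm and prints its oom_adj.
	package main

	import (
		"fmt"
		"log"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		entries, err := os.ReadDir("/proc")
		if err != nil {
			log.Fatal(err)
		}
		for _, e := range entries {
			if !e.IsDir() {
				continue
			}
			comm, err := os.ReadFile(filepath.Join("/proc", e.Name(), "comm"))
			if err != nil || strings.TrimSpace(string(comm)) != "kube-apiserver" {
				continue // not a pid directory, or a different process
			}
			adj, err := os.ReadFile(filepath.Join("/proc", e.Name(), "oom_adj"))
			if err != nil {
				log.Fatal(err)
			}
			fmt.Printf("kube-apiserver pid %s oom_adj: %s", e.Name(), adj)
			return
		}
		fmt.Println("kube-apiserver not found")
	}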
	I1026 15:22:38.236154  182377 settings.go:142] acquiring lock: {Name:mk260d179873b5d5f15b4780b692965367036bbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:22:38.236270  182377 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-137233/kubeconfig
	I1026 15:22:38.238433  182377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-137233/kubeconfig: {Name:mka07626640e842c6c2177ad5f101c4a2dd91d4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:22:38.238827  182377 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.33 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:22:38.238959  182377 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 15:22:38.239088  182377 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-574718"
	I1026 15:22:38.239110  182377 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-574718"
	W1026 15:22:38.239120  182377 addons.go:247] addon storage-provisioner should already be in state true
	I1026 15:22:38.239127  182377 addons.go:69] Setting default-storageclass=true in profile "newest-cni-574718"
	I1026 15:22:38.239155  182377 host.go:66] Checking if "newest-cni-574718" exists ...
	I1026 15:22:38.239168  182377 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-574718"
	I1026 15:22:38.239190  182377 addons.go:69] Setting dashboard=true in profile "newest-cni-574718"
	I1026 15:22:38.239234  182377 addons.go:238] Setting addon dashboard=true in "newest-cni-574718"
	W1026 15:22:38.239252  182377 addons.go:247] addon dashboard should already be in state true
	I1026 15:22:38.239176  182377 config.go:182] Loaded profile config "newest-cni-574718": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:22:38.239296  182377 host.go:66] Checking if "newest-cni-574718" exists ...
	I1026 15:22:38.239172  182377 addons.go:69] Setting metrics-server=true in profile "newest-cni-574718"
	I1026 15:22:38.239373  182377 addons.go:238] Setting addon metrics-server=true in "newest-cni-574718"
	W1026 15:22:38.239384  182377 addons.go:247] addon metrics-server should already be in state true
	I1026 15:22:38.239411  182377 host.go:66] Checking if "newest-cni-574718" exists ...
	I1026 15:22:38.240384  182377 out.go:179] * Verifying Kubernetes components...
	I1026 15:22:38.241817  182377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:22:38.243158  182377 addons.go:238] Setting addon default-storageclass=true in "newest-cni-574718"
	W1026 15:22:38.243174  182377 addons.go:247] addon default-storageclass should already be in state true
	I1026 15:22:38.243191  182377 host.go:66] Checking if "newest-cni-574718" exists ...
	I1026 15:22:38.243431  182377 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:22:38.243449  182377 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1026 15:22:38.243435  182377 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1026 15:22:38.244547  182377 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 15:22:38.244562  182377 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 15:22:38.244795  182377 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1026 15:22:38.244828  182377 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1026 15:22:38.244850  182377 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:22:38.244868  182377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 15:22:38.245802  182377 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1026 15:22:38.246890  182377 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1026 15:22:38.246914  182377 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1026 15:22:38.248534  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:38.248638  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:38.248957  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:38.249338  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:38.249373  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:38.249432  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:38.249474  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:38.249621  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:38.249648  182377 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/newest-cni-574718/id_rsa Username:docker}
	I1026 15:22:38.249665  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:38.249857  182377 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/newest-cni-574718/id_rsa Username:docker}
	I1026 15:22:38.249989  182377 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/newest-cni-574718/id_rsa Username:docker}
	I1026 15:22:38.250917  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:38.251364  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:38.251395  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:38.251570  182377 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/newest-cni-574718/id_rsa Username:docker}
	I1026 15:22:38.548715  182377 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:22:38.574744  182377 api_server.go:52] waiting for apiserver process to appear ...
	I1026 15:22:38.574851  182377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:22:38.594161  182377 api_server.go:72] duration metric: took 355.284664ms to wait for apiserver process to appear ...
	I1026 15:22:38.594202  182377 api_server.go:88] waiting for apiserver healthz status ...
	I1026 15:22:38.594226  182377 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1026 15:22:38.599953  182377 api_server.go:279] https://192.168.61.33:8443/healthz returned 200:
	ok
	I1026 15:22:38.601088  182377 api_server.go:141] control plane version: v1.34.1
	I1026 15:22:38.601116  182377 api_server.go:131] duration metric: took 6.905101ms to wait for apiserver health ...
	I1026 15:22:38.601130  182377 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 15:22:38.604838  182377 system_pods.go:59] 8 kube-system pods found
	I1026 15:22:38.604863  182377 system_pods.go:61] "coredns-66bc5c9577-fbtqn" [317aed6d-9584-40f3-9d5c-9e3c670811e8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:22:38.604872  182377 system_pods.go:61] "etcd-newest-cni-574718" [527dfb34-9071-44bf-be3c-75921ad0c849] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:22:38.604886  182377 system_pods.go:61] "kube-apiserver-newest-cni-574718" [4285cb5e-4a30-4d87-8996-1f5fbe723525] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:22:38.604917  182377 system_pods.go:61] "kube-controller-manager-newest-cni-574718" [42199d84-c838-436b-ada5-de73d6269345] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:22:38.604924  182377 system_pods.go:61] "kube-proxy-f9l99" [5e0c5bab-fea7-41d6-bffe-b659055cf68c] Running
	I1026 15:22:38.604930  182377 system_pods.go:61] "kube-scheduler-newest-cni-574718" [0250002e-226b-45d2-a685-6e315db3d009] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:22:38.604934  182377 system_pods.go:61] "metrics-server-746fcd58dc-7vxxx" [15ffbc76-a090-4786-9808-18f8b4e5ebb8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 15:22:38.604940  182377 system_pods.go:61] "storage-provisioner" [4ec0a217-f2c8-4395-babe-ee26b81a7e69] Running
	I1026 15:22:38.604945  182377 system_pods.go:74] duration metric: took 3.809261ms to wait for pod list to return data ...
	I1026 15:22:38.604952  182377 default_sa.go:34] waiting for default service account to be created ...
	I1026 15:22:38.607878  182377 default_sa.go:45] found service account: "default"
	I1026 15:22:38.607900  182377 default_sa.go:55] duration metric: took 2.941228ms for default service account to be created ...
	I1026 15:22:38.607913  182377 kubeadm.go:586] duration metric: took 369.045368ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1026 15:22:38.607930  182377 node_conditions.go:102] verifying NodePressure condition ...
	I1026 15:22:38.610509  182377 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 15:22:38.610524  182377 node_conditions.go:123] node cpu capacity is 2
	I1026 15:22:38.610536  182377 node_conditions.go:105] duration metric: took 2.601775ms to run NodePressure ...
	I1026 15:22:38.610549  182377 start.go:241] waiting for startup goroutines ...
	I1026 15:22:38.736034  182377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 15:22:38.789628  182377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:22:38.810637  182377 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1026 15:22:38.810662  182377 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1026 15:22:38.831863  182377 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1026 15:22:38.831893  182377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1026 15:22:38.877236  182377 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1026 15:22:38.877280  182377 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1026 15:22:38.881939  182377 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1026 15:22:38.881971  182377 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1026 15:22:38.934545  182377 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 15:22:38.934581  182377 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1026 15:22:38.950819  182377 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1026 15:22:38.950852  182377 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1026 15:22:38.995779  182377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 15:22:39.021057  182377 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1026 15:22:39.021079  182377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1026 15:22:39.079563  182377 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1026 15:22:39.079594  182377 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1026 15:22:39.132351  182377 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1026 15:22:39.132382  182377 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1026 15:22:39.193426  182377 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1026 15:22:39.193470  182377 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1026 15:22:39.235471  182377 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1026 15:22:39.235496  182377 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1026 15:22:39.271746  182377 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 15:22:39.271773  182377 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1026 15:22:39.307718  182377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 15:22:40.193013  182377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.403339708s)
	I1026 15:22:40.408827  182377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.413001507s)
	I1026 15:22:40.408876  182377 addons.go:479] Verifying addon metrics-server=true in "newest-cni-574718"
	I1026 15:22:40.667395  182377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.359629965s)
	I1026 15:22:40.668723  182377 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-574718 addons enable metrics-server
	
	I1026 15:22:40.669858  182377 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I1026 15:22:40.671055  182377 addons.go:514] duration metric: took 2.432108694s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I1026 15:22:40.671096  182377 start.go:246] waiting for cluster config update ...
	I1026 15:22:40.671111  182377 start.go:255] writing updated cluster config ...
	I1026 15:22:40.671384  182377 ssh_runner.go:195] Run: rm -f paused
	I1026 15:22:40.721560  182377 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1026 15:22:40.722854  182377 out.go:179] * Done! kubectl is now configured to use "newest-cni-574718" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 26 15:39:39 embed-certs-163393 crio[883]: time="2025-10-26 15:39:39.827879768Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761493179827853413,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176956,},InodesUsed:&UInt64Value{Value:65,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7fbcb907-a15b-4a4b-9642-6eff0f9280ce name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 15:39:39 embed-certs-163393 crio[883]: time="2025-10-26 15:39:39.828675305Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f725f3a0-a389-4725-93bf-59784f68a2b6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:39:39 embed-certs-163393 crio[883]: time="2025-10-26 15:39:39.828835336Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f725f3a0-a389-4725-93bf-59784f68a2b6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:39:39 embed-certs-163393 crio[883]: time="2025-10-26 15:39:39.829282141Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb1b1d3957fa97c26a33fce4f44de2a61c6af2f7ed79c3f5e4f9f3fcf1ec2ff7,PodSandboxId:e7089f34827879322ba958ff1e2536aa5c9d06297bab033156e66e99663bb3f7,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1761493079343398203,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-6ffb444bf9-rkfts,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: b0901c2e-4930-4c26-8f6a-c31d3d1f7aae,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.
container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ad56b9c2cf9dd8ea77e1aad3e8684261500554f9d30b5d5fe6e7eeb6776b3c0,PodSandboxId:09479676439ef9dd60aeec89dc053459d521e989e53f1235e33b584c59e0e735,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761492113576490027,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da1c32fd-9d15-473c-82ae-a38fb9c54941,},Annotations:map[
string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b09b5bb7efde2f5ced955864aed9909154bb89d6fcf500dae7ba11a6910cebc3,PodSandboxId:c437ab570b9f00f262f7d23afc1c735a5eae3876f7eb08a4a28550c23610a9de,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761492090253434173,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 10785d26-2fbc-4a19-ad15-fcc4d97a0f26,},Annotations:map[string]string{io.
kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de85bea7b642a6aa6c22b2beb3b7267bf31a7ed44b65d2d0423348f52cd50ec7,PodSandboxId:6c7258bb82d045ac1b4e8b45077490989f048ef8a603f07e2501fc20c3ec8b31,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761492086652179559,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-hhhkv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f28546bb-2a20-49cb-a8a3-1aec076501ce,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf
792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:897c8e09909afc69d8e2da66af0507d5028b8bdf02f16a7b0a79d15818e54fef,PodSandboxId:d9aa561c19c3895c184e746f273f9c1ee35edd4b8757aeb6782784a76a119752,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1
a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761492082818831081,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b46kz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1da91a5c-34a5-4481-9924-5e7b32f33938,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59611ca5e91cd083ff2568c97bef97d8f4740ecdf4e53381df7545cfa9e482fb,PodSandboxId:09479676439ef9dd60aeec89dc053459d521e989e53f1235e33b584c59e0e735,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTA
INER_EXITED,CreatedAt:1761492082796992011,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da1c32fd-9d15-473c-82ae-a38fb9c54941,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bea228f4e31fb15c8139ec0487d813c281472f0dfcd575e4f44c00f985baead2,PodSandboxId:be3cb9da5a41ede066eff93e0a759c393e4746d90cd8466088b2f98f242644c6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:17614
92078024828829,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-163393,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dea4f1ddb6ee22b7bdc45e2b5881aa7,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a03b2dcad1775d9dea5e8114a4a9b9ac006228bc912988ea7b070811193dcdd,PodSandboxId:beb50059bdb483887fbb6f7d4e3c4af6c5a47cb7513f75433298365738f2e4f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7e
aae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761492077986609631,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-163393,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1b5894beda8f45b6889609ac990f43f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76d2ff2c19e4db472b9824939eb750d2f0af9f398a3f0d88af735c5cf7208051,PodSandboxId:78e82152150eb7059f06f215e26df1064ff3bf0a9856c055135b20c7eecf0c29,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f
3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761492077957862685,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-163393,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 435d0719dd29427691745ddf86f8f67d,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:972f01a2f188c908249e798cb10559ac92e4c37359f37477fb3fc289799cd3d6,PodSandboxId:89b8bd3e0cbb7564d816e9a0f68c57f16741806f04587e39726aff849a
633a87,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761492077938185268,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-163393,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93d63a29c76749b7d1af0fc04350a087,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go
:74" id=f725f3a0-a389-4725-93bf-59784f68a2b6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:39:39 embed-certs-163393 crio[883]: time="2025-10-26 15:39:39.869310793Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2c7789a5-6e29-481e-a684-e0d5ad1220a9 name=/runtime.v1.RuntimeService/Version
	Oct 26 15:39:39 embed-certs-163393 crio[883]: time="2025-10-26 15:39:39.869388359Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2c7789a5-6e29-481e-a684-e0d5ad1220a9 name=/runtime.v1.RuntimeService/Version
	Oct 26 15:39:39 embed-certs-163393 crio[883]: time="2025-10-26 15:39:39.870289151Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=741ad1d8-e116-439a-bc55-f2d58695c87b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 15:39:39 embed-certs-163393 crio[883]: time="2025-10-26 15:39:39.870804649Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761493179870782588,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176956,},InodesUsed:&UInt64Value{Value:65,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=741ad1d8-e116-439a-bc55-f2d58695c87b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 15:39:39 embed-certs-163393 crio[883]: time="2025-10-26 15:39:39.871482784Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8683b1d3-3acb-4db8-841d-a55340d1eb94 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:39:39 embed-certs-163393 crio[883]: time="2025-10-26 15:39:39.871706915Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8683b1d3-3acb-4db8-841d-a55340d1eb94 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:39:39 embed-certs-163393 crio[883]: time="2025-10-26 15:39:39.872163215Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb1b1d3957fa97c26a33fce4f44de2a61c6af2f7ed79c3f5e4f9f3fcf1ec2ff7,PodSandboxId:e7089f34827879322ba958ff1e2536aa5c9d06297bab033156e66e99663bb3f7,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1761493079343398203,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-6ffb444bf9-rkfts,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: b0901c2e-4930-4c26-8f6a-c31d3d1f7aae,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.
container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ad56b9c2cf9dd8ea77e1aad3e8684261500554f9d30b5d5fe6e7eeb6776b3c0,PodSandboxId:09479676439ef9dd60aeec89dc053459d521e989e53f1235e33b584c59e0e735,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761492113576490027,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da1c32fd-9d15-473c-82ae-a38fb9c54941,},Annotations:map[
string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b09b5bb7efde2f5ced955864aed9909154bb89d6fcf500dae7ba11a6910cebc3,PodSandboxId:c437ab570b9f00f262f7d23afc1c735a5eae3876f7eb08a4a28550c23610a9de,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761492090253434173,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 10785d26-2fbc-4a19-ad15-fcc4d97a0f26,},Annotations:map[string]string{io.
kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de85bea7b642a6aa6c22b2beb3b7267bf31a7ed44b65d2d0423348f52cd50ec7,PodSandboxId:6c7258bb82d045ac1b4e8b45077490989f048ef8a603f07e2501fc20c3ec8b31,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761492086652179559,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-hhhkv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f28546bb-2a20-49cb-a8a3-1aec076501ce,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf
792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:897c8e09909afc69d8e2da66af0507d5028b8bdf02f16a7b0a79d15818e54fef,PodSandboxId:d9aa561c19c3895c184e746f273f9c1ee35edd4b8757aeb6782784a76a119752,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1
a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761492082818831081,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b46kz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1da91a5c-34a5-4481-9924-5e7b32f33938,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59611ca5e91cd083ff2568c97bef97d8f4740ecdf4e53381df7545cfa9e482fb,PodSandboxId:09479676439ef9dd60aeec89dc053459d521e989e53f1235e33b584c59e0e735,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTA
INER_EXITED,CreatedAt:1761492082796992011,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da1c32fd-9d15-473c-82ae-a38fb9c54941,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bea228f4e31fb15c8139ec0487d813c281472f0dfcd575e4f44c00f985baead2,PodSandboxId:be3cb9da5a41ede066eff93e0a759c393e4746d90cd8466088b2f98f242644c6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:17614
92078024828829,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-163393,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dea4f1ddb6ee22b7bdc45e2b5881aa7,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a03b2dcad1775d9dea5e8114a4a9b9ac006228bc912988ea7b070811193dcdd,PodSandboxId:beb50059bdb483887fbb6f7d4e3c4af6c5a47cb7513f75433298365738f2e4f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7e
aae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761492077986609631,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-163393,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1b5894beda8f45b6889609ac990f43f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76d2ff2c19e4db472b9824939eb750d2f0af9f398a3f0d88af735c5cf7208051,PodSandboxId:78e82152150eb7059f06f215e26df1064ff3bf0a9856c055135b20c7eecf0c29,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f
3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761492077957862685,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-163393,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 435d0719dd29427691745ddf86f8f67d,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:972f01a2f188c908249e798cb10559ac92e4c37359f37477fb3fc289799cd3d6,PodSandboxId:89b8bd3e0cbb7564d816e9a0f68c57f16741806f04587e39726aff849a
633a87,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761492077938185268,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-163393,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93d63a29c76749b7d1af0fc04350a087,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go
:74" id=8683b1d3-3acb-4db8-841d-a55340d1eb94 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:39:39 embed-certs-163393 crio[883]: time="2025-10-26 15:39:39.907181648Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9fcbd933-ced2-46e2-a09f-ebe18c813884 name=/runtime.v1.RuntimeService/Version
	Oct 26 15:39:39 embed-certs-163393 crio[883]: time="2025-10-26 15:39:39.907264112Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9fcbd933-ced2-46e2-a09f-ebe18c813884 name=/runtime.v1.RuntimeService/Version
	Oct 26 15:39:39 embed-certs-163393 crio[883]: time="2025-10-26 15:39:39.908991326Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=391979b1-6d72-41bb-8b8b-30bb4305bac2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 15:39:39 embed-certs-163393 crio[883]: time="2025-10-26 15:39:39.909436749Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761493179909415996,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176956,},InodesUsed:&UInt64Value{Value:65,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=391979b1-6d72-41bb-8b8b-30bb4305bac2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 15:39:39 embed-certs-163393 crio[883]: time="2025-10-26 15:39:39.910042028Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4dfc9c17-d2f5-4cf2-a6d8-94bc779bf008 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:39:39 embed-certs-163393 crio[883]: time="2025-10-26 15:39:39.910110107Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4dfc9c17-d2f5-4cf2-a6d8-94bc779bf008 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:39:39 embed-certs-163393 crio[883]: time="2025-10-26 15:39:39.910333876Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb1b1d3957fa97c26a33fce4f44de2a61c6af2f7ed79c3f5e4f9f3fcf1ec2ff7,PodSandboxId:e7089f34827879322ba958ff1e2536aa5c9d06297bab033156e66e99663bb3f7,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1761493079343398203,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-6ffb444bf9-rkfts,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: b0901c2e-4930-4c26-8f6a-c31d3d1f7aae,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.
container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ad56b9c2cf9dd8ea77e1aad3e8684261500554f9d30b5d5fe6e7eeb6776b3c0,PodSandboxId:09479676439ef9dd60aeec89dc053459d521e989e53f1235e33b584c59e0e735,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761492113576490027,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da1c32fd-9d15-473c-82ae-a38fb9c54941,},Annotations:map[
string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b09b5bb7efde2f5ced955864aed9909154bb89d6fcf500dae7ba11a6910cebc3,PodSandboxId:c437ab570b9f00f262f7d23afc1c735a5eae3876f7eb08a4a28550c23610a9de,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761492090253434173,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 10785d26-2fbc-4a19-ad15-fcc4d97a0f26,},Annotations:map[string]string{io.
kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de85bea7b642a6aa6c22b2beb3b7267bf31a7ed44b65d2d0423348f52cd50ec7,PodSandboxId:6c7258bb82d045ac1b4e8b45077490989f048ef8a603f07e2501fc20c3ec8b31,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761492086652179559,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-hhhkv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f28546bb-2a20-49cb-a8a3-1aec076501ce,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf
792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:897c8e09909afc69d8e2da66af0507d5028b8bdf02f16a7b0a79d15818e54fef,PodSandboxId:d9aa561c19c3895c184e746f273f9c1ee35edd4b8757aeb6782784a76a119752,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1
a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761492082818831081,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b46kz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1da91a5c-34a5-4481-9924-5e7b32f33938,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59611ca5e91cd083ff2568c97bef97d8f4740ecdf4e53381df7545cfa9e482fb,PodSandboxId:09479676439ef9dd60aeec89dc053459d521e989e53f1235e33b584c59e0e735,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTA
INER_EXITED,CreatedAt:1761492082796992011,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da1c32fd-9d15-473c-82ae-a38fb9c54941,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bea228f4e31fb15c8139ec0487d813c281472f0dfcd575e4f44c00f985baead2,PodSandboxId:be3cb9da5a41ede066eff93e0a759c393e4746d90cd8466088b2f98f242644c6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:17614
92078024828829,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-163393,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dea4f1ddb6ee22b7bdc45e2b5881aa7,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a03b2dcad1775d9dea5e8114a4a9b9ac006228bc912988ea7b070811193dcdd,PodSandboxId:beb50059bdb483887fbb6f7d4e3c4af6c5a47cb7513f75433298365738f2e4f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7e
aae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761492077986609631,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-163393,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1b5894beda8f45b6889609ac990f43f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76d2ff2c19e4db472b9824939eb750d2f0af9f398a3f0d88af735c5cf7208051,PodSandboxId:78e82152150eb7059f06f215e26df1064ff3bf0a9856c055135b20c7eecf0c29,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f
3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761492077957862685,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-163393,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 435d0719dd29427691745ddf86f8f67d,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:972f01a2f188c908249e798cb10559ac92e4c37359f37477fb3fc289799cd3d6,PodSandboxId:89b8bd3e0cbb7564d816e9a0f68c57f16741806f04587e39726aff849a
633a87,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761492077938185268,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-163393,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93d63a29c76749b7d1af0fc04350a087,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go
:74" id=4dfc9c17-d2f5-4cf2-a6d8-94bc779bf008 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:39:39 embed-certs-163393 crio[883]: time="2025-10-26 15:39:39.952317625Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7e06253e-bf53-4bbf-9c0d-189aa7bb0d5f name=/runtime.v1.RuntimeService/Version
	Oct 26 15:39:39 embed-certs-163393 crio[883]: time="2025-10-26 15:39:39.952453221Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7e06253e-bf53-4bbf-9c0d-189aa7bb0d5f name=/runtime.v1.RuntimeService/Version
	Oct 26 15:39:39 embed-certs-163393 crio[883]: time="2025-10-26 15:39:39.953630907Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1b3cdc1d-c9a7-45a3-881d-e725937d2da0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 15:39:39 embed-certs-163393 crio[883]: time="2025-10-26 15:39:39.954059322Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761493179954039836,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176956,},InodesUsed:&UInt64Value{Value:65,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1b3cdc1d-c9a7-45a3-881d-e725937d2da0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 15:39:39 embed-certs-163393 crio[883]: time="2025-10-26 15:39:39.954668141Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=97295558-8833-4fec-9f97-0054f8c7750a name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:39:39 embed-certs-163393 crio[883]: time="2025-10-26 15:39:39.954832646Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=97295558-8833-4fec-9f97-0054f8c7750a name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:39:39 embed-certs-163393 crio[883]: time="2025-10-26 15:39:39.955387594Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb1b1d3957fa97c26a33fce4f44de2a61c6af2f7ed79c3f5e4f9f3fcf1ec2ff7,PodSandboxId:e7089f34827879322ba958ff1e2536aa5c9d06297bab033156e66e99663bb3f7,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1761493079343398203,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-6ffb444bf9-rkfts,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: b0901c2e-4930-4c26-8f6a-c31d3d1f7aae,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.kubernetes.
container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ad56b9c2cf9dd8ea77e1aad3e8684261500554f9d30b5d5fe6e7eeb6776b3c0,PodSandboxId:09479676439ef9dd60aeec89dc053459d521e989e53f1235e33b584c59e0e735,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761492113576490027,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da1c32fd-9d15-473c-82ae-a38fb9c54941,},Annotations:map[
string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b09b5bb7efde2f5ced955864aed9909154bb89d6fcf500dae7ba11a6910cebc3,PodSandboxId:c437ab570b9f00f262f7d23afc1c735a5eae3876f7eb08a4a28550c23610a9de,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761492090253434173,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 10785d26-2fbc-4a19-ad15-fcc4d97a0f26,},Annotations:map[string]string{io.
kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de85bea7b642a6aa6c22b2beb3b7267bf31a7ed44b65d2d0423348f52cd50ec7,PodSandboxId:6c7258bb82d045ac1b4e8b45077490989f048ef8a603f07e2501fc20c3ec8b31,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761492086652179559,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-hhhkv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f28546bb-2a20-49cb-a8a3-1aec076501ce,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf
792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:897c8e09909afc69d8e2da66af0507d5028b8bdf02f16a7b0a79d15818e54fef,PodSandboxId:d9aa561c19c3895c184e746f273f9c1ee35edd4b8757aeb6782784a76a119752,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1
a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761492082818831081,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b46kz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1da91a5c-34a5-4481-9924-5e7b32f33938,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59611ca5e91cd083ff2568c97bef97d8f4740ecdf4e53381df7545cfa9e482fb,PodSandboxId:09479676439ef9dd60aeec89dc053459d521e989e53f1235e33b584c59e0e735,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTA
INER_EXITED,CreatedAt:1761492082796992011,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da1c32fd-9d15-473c-82ae-a38fb9c54941,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bea228f4e31fb15c8139ec0487d813c281472f0dfcd575e4f44c00f985baead2,PodSandboxId:be3cb9da5a41ede066eff93e0a759c393e4746d90cd8466088b2f98f242644c6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:17614
92078024828829,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-163393,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dea4f1ddb6ee22b7bdc45e2b5881aa7,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a03b2dcad1775d9dea5e8114a4a9b9ac006228bc912988ea7b070811193dcdd,PodSandboxId:beb50059bdb483887fbb6f7d4e3c4af6c5a47cb7513f75433298365738f2e4f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7e
aae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761492077986609631,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-163393,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1b5894beda8f45b6889609ac990f43f,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76d2ff2c19e4db472b9824939eb750d2f0af9f398a3f0d88af735c5cf7208051,PodSandboxId:78e82152150eb7059f06f215e26df1064ff3bf0a9856c055135b20c7eecf0c29,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f
3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761492077957862685,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-163393,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 435d0719dd29427691745ddf86f8f67d,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:972f01a2f188c908249e798cb10559ac92e4c37359f37477fb3fc289799cd3d6,PodSandboxId:89b8bd3e0cbb7564d816e9a0f68c57f16741806f04587e39726aff849a
633a87,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761492077938185268,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-163393,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93d63a29c76749b7d1af0fc04350a087,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go
:74" id=97295558-8833-4fec-9f97-0054f8c7750a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	bb1b1d3957fa9       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                      About a minute ago   Exited              dashboard-metrics-scraper   8                   e7089f3482787       dashboard-metrics-scraper-6ffb444bf9-rkfts
	0ad56b9c2cf9d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      17 minutes ago       Running             storage-provisioner         2                   09479676439ef       storage-provisioner
	b09b5bb7efde2       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   18 minutes ago       Running             busybox                     1                   c437ab570b9f0       busybox
	de85bea7b642a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      18 minutes ago       Running             coredns                     1                   6c7258bb82d04       coredns-66bc5c9577-hhhkv
	897c8e09909af       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      18 minutes ago       Running             kube-proxy                  1                   d9aa561c19c38       kube-proxy-b46kz
	59611ca5e91cd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago       Exited              storage-provisioner         1                   09479676439ef       storage-provisioner
	bea228f4e31fb       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      18 minutes ago       Running             etcd                        1                   be3cb9da5a41e       etcd-embed-certs-163393
	2a03b2dcad177       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      18 minutes ago       Running             kube-scheduler              1                   beb50059bdb48       kube-scheduler-embed-certs-163393
	76d2ff2c19e4d       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      18 minutes ago       Running             kube-controller-manager     1                   78e82152150eb       kube-controller-manager-embed-certs-163393
	972f01a2f188c       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      18 minutes ago       Running             kube-apiserver              1                   89b8bd3e0cbb7       kube-apiserver-embed-certs-163393
	
	
	==> coredns [de85bea7b642a6aa6c22b2beb3b7267bf31a7ed44b65d2d0423348f52cd50ec7] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57956 - 22912 "HINFO IN 5038473847254140814.7620950374588468031. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.029880705s
	
	
	==> describe nodes <==
	Name:               embed-certs-163393
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-163393
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46
	                    minikube.k8s.io/name=embed-certs-163393
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T15_18_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 15:18:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-163393
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 15:39:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 15:37:20 +0000   Sun, 26 Oct 2025 15:18:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 15:37:20 +0000   Sun, 26 Oct 2025 15:18:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 15:37:20 +0000   Sun, 26 Oct 2025 15:18:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 15:37:20 +0000   Sun, 26 Oct 2025 15:21:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.103
	  Hostname:    embed-certs-163393
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042708Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042708Ki
	  pods:               110
	System Info:
	  Machine ID:                 31c0eca226a441c9a6dfd975a508de47
	  System UUID:                31c0eca2-26a4-41c9-a6df-d975a508de47
	  Boot ID:                    85e8c752-cce9-4b70-b7d5-1ff1562ab03c
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-66bc5c9577-hhhkv                      100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     21m
	  kube-system                 etcd-embed-certs-163393                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         21m
	  kube-system                 kube-apiserver-embed-certs-163393             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-embed-certs-163393    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-b46kz                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-embed-certs-163393             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-746fcd58dc-frdcx               100m (5%)     0 (0%)      200Mi (6%)       0 (0%)         20m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-rkfts    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-nxc8p         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (12%)  170Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 21m                kube-proxy       
	  Normal   Starting                 18m                kube-proxy       
	  Normal   Starting                 21m                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  21m                kubelet          Node embed-certs-163393 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    21m                kubelet          Node embed-certs-163393 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     21m                kubelet          Node embed-certs-163393 status is now: NodeHasSufficientPID
	  Normal   NodeReady                21m                kubelet          Node embed-certs-163393 status is now: NodeReady
	  Normal   RegisteredNode           21m                node-controller  Node embed-certs-163393 event: Registered Node embed-certs-163393 in Controller
	  Normal   Starting                 18m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node embed-certs-163393 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node embed-certs-163393 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node embed-certs-163393 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 18m                kubelet          Node embed-certs-163393 has been rebooted, boot id: 85e8c752-cce9-4b70-b7d5-1ff1562ab03c
	  Normal   RegisteredNode           18m                node-controller  Node embed-certs-163393 event: Registered Node embed-certs-163393 in Controller
	
	
	==> dmesg <==
	[Oct26 15:20] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[Oct26 15:21] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.001892] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.843151] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000015] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.129076] kauditd_printk_skb: 60 callbacks suppressed
	[  +0.093708] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.581132] kauditd_printk_skb: 168 callbacks suppressed
	[  +3.754830] kauditd_printk_skb: 347 callbacks suppressed
	[  +0.036096] kauditd_printk_skb: 11 callbacks suppressed
	[Oct26 15:22] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.857673] kauditd_printk_skb: 32 callbacks suppressed
	[ +18.721574] kauditd_printk_skb: 13 callbacks suppressed
	[ +23.987667] kauditd_printk_skb: 6 callbacks suppressed
	[Oct26 15:23] kauditd_printk_skb: 6 callbacks suppressed
	[Oct26 15:25] kauditd_printk_skb: 6 callbacks suppressed
	[Oct26 15:27] kauditd_printk_skb: 6 callbacks suppressed
	[Oct26 15:32] kauditd_printk_skb: 6 callbacks suppressed
	[Oct26 15:37] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [bea228f4e31fb15c8139ec0487d813c281472f0dfcd575e4f44c00f985baead2] <==
	{"level":"warn","ts":"2025-10-26T15:21:20.575413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:21:20.636472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37194","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-26T15:21:24.998226Z","caller":"traceutil/trace.go:172","msg":"trace[523423115] transaction","detail":"{read_only:false; response_revision:571; number_of_response:1; }","duration":"109.682013ms","start":"2025-10-26T15:21:24.888525Z","end":"2025-10-26T15:21:24.998206Z","steps":["trace[523423115] 'process raft request'  (duration: 108.120764ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T15:21:34.218966Z","caller":"traceutil/trace.go:172","msg":"trace[1663933131] transaction","detail":"{read_only:false; response_revision:666; number_of_response:1; }","duration":"184.334714ms","start":"2025-10-26T15:21:34.034608Z","end":"2025-10-26T15:21:34.218943Z","steps":["trace[1663933131] 'process raft request'  (duration: 183.802487ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T15:21:34.357615Z","caller":"traceutil/trace.go:172","msg":"trace[648416822] linearizableReadLoop","detail":"{readStateIndex:713; appliedIndex:713; }","duration":"116.342744ms","start":"2025-10-26T15:21:34.241227Z","end":"2025-10-26T15:21:34.357570Z","steps":["trace[648416822] 'read index received'  (duration: 116.291615ms)","trace[648416822] 'applied index is now lower than readState.Index'  (duration: 49.88µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-26T15:21:34.656118Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"414.869359ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-embed-certs-163393\" limit:1 ","response":"range_response_count:1 size:7049"}
	{"level":"info","ts":"2025-10-26T15:21:34.657049Z","caller":"traceutil/trace.go:172","msg":"trace[41570780] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-embed-certs-163393; range_end:; response_count:1; response_revision:666; }","duration":"415.813974ms","start":"2025-10-26T15:21:34.241223Z","end":"2025-10-26T15:21:34.657037Z","steps":["trace[41570780] 'agreement among raft nodes before linearized reading'  (duration: 116.466579ms)","trace[41570780] 'range keys from in-memory index tree'  (duration: 298.298467ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-26T15:21:34.657086Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-26T15:21:34.241201Z","time spent":"415.873715ms","remote":"127.0.0.1:36434","response type":"/etcdserverpb.KV/Range","request count":0,"request size":73,"response count":1,"response size":7072,"request content":"key:\"/registry/pods/kube-system/kube-controller-manager-embed-certs-163393\" limit:1 "}
	{"level":"warn","ts":"2025-10-26T15:21:34.656844Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"298.712389ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16244090372967315732 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-dns\" mod_revision:652 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-dns\" value_size:835 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/kube-dns\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-26T15:21:34.657721Z","caller":"traceutil/trace.go:172","msg":"trace[106216093] linearizableReadLoop","detail":"{readStateIndex:714; appliedIndex:713; }","duration":"163.713335ms","start":"2025-10-26T15:21:34.493992Z","end":"2025-10-26T15:21:34.657705Z","steps":["trace[106216093] 'read index received'  (duration: 161.942817ms)","trace[106216093] 'applied index is now lower than readState.Index'  (duration: 1.768258ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-26T15:21:34.657882Z","caller":"traceutil/trace.go:172","msg":"trace[471236467] transaction","detail":"{read_only:false; response_revision:669; number_of_response:1; }","duration":"425.24744ms","start":"2025-10-26T15:21:34.232625Z","end":"2025-10-26T15:21:34.657872Z","steps":["trace[471236467] 'process raft request'  (duration: 424.683397ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-26T15:21:34.657915Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"163.928333ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-hhhkv\" limit:1 ","response":"range_response_count:1 size:5458"}
	{"level":"info","ts":"2025-10-26T15:21:34.657978Z","caller":"traceutil/trace.go:172","msg":"trace[460899596] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-hhhkv; range_end:; response_count:1; response_revision:669; }","duration":"163.993817ms","start":"2025-10-26T15:21:34.493973Z","end":"2025-10-26T15:21:34.657967Z","steps":["trace[460899596] 'agreement among raft nodes before linearized reading'  (duration: 163.83791ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-26T15:21:34.658090Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-26T15:21:34.232608Z","time spent":"425.327628ms","remote":"127.0.0.1:36994","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4134,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/replicasets/kube-system/coredns-66bc5c9577\" mod_revision:577 > success:<request_put:<key:\"/registry/replicasets/kube-system/coredns-66bc5c9577\" value_size:4074 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/coredns-66bc5c9577\" > >"}
	{"level":"info","ts":"2025-10-26T15:21:34.658297Z","caller":"traceutil/trace.go:172","msg":"trace[1999106465] transaction","detail":"{read_only:false; response_revision:667; number_of_response:1; }","duration":"431.065002ms","start":"2025-10-26T15:21:34.227224Z","end":"2025-10-26T15:21:34.658289Z","steps":["trace[1999106465] 'process raft request'  (duration: 130.517356ms)","trace[1999106465] 'compare'  (duration: 298.563862ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-26T15:21:34.658447Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-26T15:21:34.227207Z","time spent":"431.215255ms","remote":"127.0.0.1:36386","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":892,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/kube-dns\" mod_revision:652 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/kube-dns\" value_size:835 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/kube-dns\" > >"}
	{"level":"info","ts":"2025-10-26T15:21:34.658774Z","caller":"traceutil/trace.go:172","msg":"trace[1167441621] transaction","detail":"{read_only:false; response_revision:668; number_of_response:1; }","duration":"430.518241ms","start":"2025-10-26T15:21:34.228250Z","end":"2025-10-26T15:21:34.658768Z","steps":["trace[1167441621] 'process raft request'  (duration: 429.016312ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-26T15:21:34.659038Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-26T15:21:34.228236Z","time spent":"430.779375ms","remote":"127.0.0.1:36600","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1259,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/endpointslices/kube-system/kube-dns-f9l6q\" mod_revision:651 > success:<request_put:<key:\"/registry/endpointslices/kube-system/kube-dns-f9l6q\" value_size:1200 >> failure:<request_range:<key:\"/registry/endpointslices/kube-system/kube-dns-f9l6q\" > >"}
	{"level":"warn","ts":"2025-10-26T15:21:58.097300Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"132.184259ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16244090372967315873 > lease_revoke:<id:616e9a211c0f3952>","response":"size:28"}
	{"level":"info","ts":"2025-10-26T15:31:19.271908Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1031}
	{"level":"info","ts":"2025-10-26T15:31:19.294306Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1031,"took":"22.047005ms","hash":1865775486,"current-db-size-bytes":3198976,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":1282048,"current-db-size-in-use":"1.3 MB"}
	{"level":"info","ts":"2025-10-26T15:31:19.294386Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1865775486,"revision":1031,"compact-revision":-1}
	{"level":"info","ts":"2025-10-26T15:36:19.281080Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1313}
	{"level":"info","ts":"2025-10-26T15:36:19.286658Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1313,"took":"5.363251ms","hash":1500309795,"current-db-size-bytes":3198976,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":1806336,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-10-26T15:36:19.286705Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1500309795,"revision":1313,"compact-revision":1031}
	
	
	==> kernel <==
	 15:39:40 up 18 min,  0 users,  load average: 0.16, 0.12, 0.09
	Linux embed-certs-163393 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [972f01a2f188c908249e798cb10559ac92e4c37359f37477fb3fc289799cd3d6] <==
	I1026 15:36:22.378000       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1026 15:36:22.378065       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 15:36:22.378244       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1026 15:36:22.379339       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1026 15:37:22.378162       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 15:37:22.378210       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1026 15:37:22.378222       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1026 15:37:22.380385       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 15:37:22.380424       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1026 15:37:22.380432       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1026 15:39:22.378994       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 15:39:22.379054       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1026 15:39:22.379071       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1026 15:39:22.381352       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 15:39:22.381462       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1026 15:39:22.381473       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
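The repeated 503s above come from the aggregation layer being unable to reach metrics-server, which the kubelet log further down shows stuck in ImagePullBackOff for fake.domain/registry.k8s.io/echoserver:1.4. A minimal client-go sketch (standalone program and default kubeconfig path are assumptions, not part of the test suite) for reading the Available condition of that APIService:

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
    	"k8s.io/apimachinery/pkg/runtime/schema"
    	"k8s.io/client-go/dynamic"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumes the default kubeconfig at ~/.kube/config.
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		log.Fatal(err)
    	}
    	dyn, err := dynamic.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	gvr := schema.GroupVersionResource{
    		Group: "apiregistration.k8s.io", Version: "v1", Resource: "apiservices",
    	}
    	obj, err := dyn.Resource(gvr).Get(context.Background(),
    		"v1beta1.metrics.k8s.io", metav1.GetOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Print each status condition, e.g. Available=False (MissingEndpoints).
    	conds, _, _ := unstructured.NestedSlice(obj.Object, "status", "conditions")
    	for _, c := range conds {
    		m := c.(map[string]interface{})
    		fmt.Printf("%v=%v: %v\n", m["type"], m["status"], m["message"])
    	}
    }

Until that APIService reports Available=True, the apiserver will keep logging the OpenAPI download failures shown above.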
	
	
	==> kube-controller-manager [76d2ff2c19e4db472b9824939eb750d2f0af9f398a3f0d88af735c5cf7208051] <==
	I1026 15:33:25.301719       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1026 15:33:55.145887       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 15:33:55.309874       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1026 15:34:25.150729       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 15:34:25.316624       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1026 15:34:55.155469       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 15:34:55.324488       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1026 15:35:25.160120       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 15:35:25.333479       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1026 15:35:55.164264       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 15:35:55.341884       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1026 15:36:25.169245       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 15:36:25.350360       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1026 15:36:55.175410       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 15:36:55.359489       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1026 15:37:25.180343       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 15:37:25.368137       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1026 15:37:55.184206       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 15:37:55.376173       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1026 15:38:25.188487       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 15:38:25.381913       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1026 15:38:55.192891       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 15:38:55.389012       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1026 15:39:25.197094       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 15:39:25.396783       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [897c8e09909afc69d8e2da66af0507d5028b8bdf02f16a7b0a79d15818e54fef] <==
	I1026 15:21:23.159825       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 15:21:23.261290       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 15:21:23.261724       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.103"]
	E1026 15:21:23.261858       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 15:21:23.444002       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1026 15:21:23.444180       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1026 15:21:23.444370       1 server_linux.go:132] "Using iptables Proxier"
	I1026 15:21:23.457163       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 15:21:23.457608       1 server.go:527] "Version info" version="v1.34.1"
	I1026 15:21:23.457693       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:21:23.467906       1 config.go:200] "Starting service config controller"
	I1026 15:21:23.467933       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 15:21:23.467951       1 config.go:106] "Starting endpoint slice config controller"
	I1026 15:21:23.467955       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 15:21:23.467964       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 15:21:23.467967       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 15:21:23.470600       1 config.go:309] "Starting node config controller"
	I1026 15:21:23.470622       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 15:21:23.570416       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 15:21:23.570480       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1026 15:21:23.584868       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 15:21:23.584935       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [2a03b2dcad1775d9dea5e8114a4a9b9ac006228bc912988ea7b070811193dcdd] <==
	I1026 15:21:19.520346       1 serving.go:386] Generated self-signed cert in-memory
	W1026 15:21:21.296244       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1026 15:21:21.296360       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1026 15:21:21.296544       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1026 15:21:21.296619       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1026 15:21:21.401191       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1026 15:21:21.401233       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:21:21.407480       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:21:21.407517       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:21:21.409992       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1026 15:21:21.410114       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 15:21:21.508726       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 15:38:52 embed-certs-163393 kubelet[1214]: E1026 15:38:52.328687    1214 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rkfts_kubernetes-dashboard(b0901c2e-4930-4c26-8f6a-c31d3d1f7aae)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rkfts" podUID="b0901c2e-4930-4c26-8f6a-c31d3d1f7aae"
	Oct 26 15:38:56 embed-certs-163393 kubelet[1214]: E1026 15:38:56.330695    1214 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-frdcx" podUID="13465c12-1bb9-42c2-922e-695a3e2387b6"
	Oct 26 15:38:56 embed-certs-163393 kubelet[1214]: E1026 15:38:56.595505    1214 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761493136595150415  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 26 15:38:56 embed-certs-163393 kubelet[1214]: E1026 15:38:56.595573    1214 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761493136595150415  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 26 15:38:59 embed-certs-163393 kubelet[1214]: E1026 15:38:59.329084    1214 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-nxc8p" podUID="ee5a7e88-da7c-4c3b-bae0-abbaf5ff76bc"
	Oct 26 15:39:06 embed-certs-163393 kubelet[1214]: I1026 15:39:06.329689    1214 scope.go:117] "RemoveContainer" containerID="bb1b1d3957fa97c26a33fce4f44de2a61c6af2f7ed79c3f5e4f9f3fcf1ec2ff7"
	Oct 26 15:39:06 embed-certs-163393 kubelet[1214]: E1026 15:39:06.329821    1214 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rkfts_kubernetes-dashboard(b0901c2e-4930-4c26-8f6a-c31d3d1f7aae)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rkfts" podUID="b0901c2e-4930-4c26-8f6a-c31d3d1f7aae"
	Oct 26 15:39:06 embed-certs-163393 kubelet[1214]: E1026 15:39:06.597347    1214 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761493146596490601  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 26 15:39:06 embed-certs-163393 kubelet[1214]: E1026 15:39:06.597402    1214 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761493146596490601  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 26 15:39:11 embed-certs-163393 kubelet[1214]: E1026 15:39:11.329767    1214 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-frdcx" podUID="13465c12-1bb9-42c2-922e-695a3e2387b6"
	Oct 26 15:39:13 embed-certs-163393 kubelet[1214]: E1026 15:39:13.330226    1214 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-nxc8p" podUID="ee5a7e88-da7c-4c3b-bae0-abbaf5ff76bc"
	Oct 26 15:39:16 embed-certs-163393 kubelet[1214]: E1026 15:39:16.599124    1214 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761493156598598033  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 26 15:39:16 embed-certs-163393 kubelet[1214]: E1026 15:39:16.599161    1214 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761493156598598033  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 26 15:39:20 embed-certs-163393 kubelet[1214]: I1026 15:39:20.328990    1214 scope.go:117] "RemoveContainer" containerID="bb1b1d3957fa97c26a33fce4f44de2a61c6af2f7ed79c3f5e4f9f3fcf1ec2ff7"
	Oct 26 15:39:20 embed-certs-163393 kubelet[1214]: E1026 15:39:20.329166    1214 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rkfts_kubernetes-dashboard(b0901c2e-4930-4c26-8f6a-c31d3d1f7aae)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rkfts" podUID="b0901c2e-4930-4c26-8f6a-c31d3d1f7aae"
	Oct 26 15:39:25 embed-certs-163393 kubelet[1214]: E1026 15:39:25.330013    1214 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-nxc8p" podUID="ee5a7e88-da7c-4c3b-bae0-abbaf5ff76bc"
	Oct 26 15:39:25 embed-certs-163393 kubelet[1214]: E1026 15:39:25.330187    1214 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-frdcx" podUID="13465c12-1bb9-42c2-922e-695a3e2387b6"
	Oct 26 15:39:26 embed-certs-163393 kubelet[1214]: E1026 15:39:26.600262    1214 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761493166600018881  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 26 15:39:26 embed-certs-163393 kubelet[1214]: E1026 15:39:26.600285    1214 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761493166600018881  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 26 15:39:35 embed-certs-163393 kubelet[1214]: I1026 15:39:35.328962    1214 scope.go:117] "RemoveContainer" containerID="bb1b1d3957fa97c26a33fce4f44de2a61c6af2f7ed79c3f5e4f9f3fcf1ec2ff7"
	Oct 26 15:39:35 embed-certs-163393 kubelet[1214]: E1026 15:39:35.329152    1214 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-rkfts_kubernetes-dashboard(b0901c2e-4930-4c26-8f6a-c31d3d1f7aae)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-rkfts" podUID="b0901c2e-4930-4c26-8f6a-c31d3d1f7aae"
	Oct 26 15:39:36 embed-certs-163393 kubelet[1214]: E1026 15:39:36.330797    1214 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-nxc8p" podUID="ee5a7e88-da7c-4c3b-bae0-abbaf5ff76bc"
	Oct 26 15:39:36 embed-certs-163393 kubelet[1214]: E1026 15:39:36.602776    1214 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761493176601772111  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 26 15:39:36 embed-certs-163393 kubelet[1214]: E1026 15:39:36.602802    1214 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761493176601772111  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 26 15:39:38 embed-certs-163393 kubelet[1214]: E1026 15:39:38.330826    1214 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-frdcx" podUID="13465c12-1bb9-42c2-922e-695a3e2387b6"
	
	
	==> storage-provisioner [0ad56b9c2cf9dd8ea77e1aad3e8684261500554f9d30b5d5fe6e7eeb6776b3c0] <==
	W1026 15:39:15.993970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:39:17.997178       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:39:18.002416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:39:20.005616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:39:20.013263       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:39:22.016635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:39:22.021451       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:39:24.024940       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:39:24.032579       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:39:26.036141       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:39:26.040743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:39:28.043958       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:39:28.048214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:39:30.052184       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:39:30.060163       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:39:32.063570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:39:32.068109       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:39:34.070932       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:39:34.075766       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:39:36.079394       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:39:36.083900       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:39:38.086134       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:39:38.093999       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:39:40.097735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:39:40.104430       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [59611ca5e91cd083ff2568c97bef97d8f4740ecdf4e53381df7545cfa9e482fb] <==
	I1026 15:21:22.986162       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1026 15:21:52.997101       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-163393 -n embed-certs-163393
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-163393 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-frdcx kubernetes-dashboard-855c9754f9-nxc8p
helpers_test.go:282: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context embed-certs-163393 describe pod metrics-server-746fcd58dc-frdcx kubernetes-dashboard-855c9754f9-nxc8p
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context embed-certs-163393 describe pod metrics-server-746fcd58dc-frdcx kubernetes-dashboard-855c9754f9-nxc8p: exit status 1 (63.910607ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-frdcx" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-nxc8p" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context embed-certs-163393 describe pod metrics-server-746fcd58dc-frdcx kubernetes-dashboard-855c9754f9-nxc8p: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (542.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-c8wqg" [cc5b36c9-7c56-4a05-8b30-8bf6d2b12ef4] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1026 15:31:29.934249  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/flannel-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:31:40.876109  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/functional-946873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:32:24.824657  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/bridge-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:33:20.548564  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/old-k8s-version-065983/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:33:43.650645  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/no-preload-758002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:33:54.587159  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/auto-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:33:55.618301  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:34:15.735069  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/kindnet-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:34:42.686192  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/calico-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:35:17.651749  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/auto-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:35:35.254826  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/custom-flannel-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:35:38.799068  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/kindnet-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:36:05.750712  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/calico-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:36:18.564310  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/enable-default-cni-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:36:29.933677  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/flannel-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:36:40.875584  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/functional-946873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:36:58.318341  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/custom-flannel-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:37:24.824217  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/bridge-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:37:41.628113  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/enable-default-cni-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:37:53.001203  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/flannel-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:38:03.950123  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/functional-946873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:38:20.548281  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/old-k8s-version-065983/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:38:43.651028  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/no-preload-758002/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:38:47.892354  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/bridge-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:38:54.587579  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/auto-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:38:55.618505  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:39:15.735022  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/kindnet-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-705037 -n default-k8s-diff-port-705037
start_stop_delete_test.go:285: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-10-26 15:40:23.143005422 +0000 UTC m=+5129.736436862
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-705037 describe po kubernetes-dashboard-855c9754f9-c8wqg -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) kubectl --context default-k8s-diff-port-705037 describe po kubernetes-dashboard-855c9754f9-c8wqg -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-c8wqg
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             default-k8s-diff-port-705037/192.168.72.253
Start Time:       Sun, 26 Oct 2025 15:22:08 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sjgzl (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-sjgzl:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  18m                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-c8wqg to default-k8s-diff-port-705037
Warning  Failed     15m                   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": fetching target platform image selected from manifest list: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    13m (x5 over 18m)     kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     12m (x4 over 17m)     kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     12m (x5 over 17m)     kubelet            Error: ErrImagePull
Normal   BackOff    2m49s (x46 over 17m)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     2m34s (x47 over 17m)  kubelet            Error: ImagePullBackOff
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-705037 logs kubernetes-dashboard-855c9754f9-c8wqg -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-705037 logs kubernetes-dashboard-855c9754f9-c8wqg -n kubernetes-dashboard: exit status 1 (75.616067ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-c8wqg" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:285: kubectl --context default-k8s-diff-port-705037 logs kubernetes-dashboard-855c9754f9-c8wqg -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-705037 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-705037 -n default-k8s-diff-port-705037
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-705037 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-705037 logs -n 25: (1.115151214s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────
─────┐
	│ COMMAND │                                                                                                                    ARGS                                                                                                                     │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────
─────┤
	│ unpause │ -p old-k8s-version-065983 --alsologtostderr -v=1                                                                                                                                                                                            │ old-k8s-version-065983       │ jenkins │ v1.37.0 │ 26 Oct 25 15:20 UTC │ 26 Oct 25 15:20 UTC │
	│ delete  │ -p old-k8s-version-065983                                                                                                                                                                                                                   │ old-k8s-version-065983       │ jenkins │ v1.37.0 │ 26 Oct 25 15:20 UTC │ 26 Oct 25 15:21 UTC │
	│ delete  │ -p old-k8s-version-065983                                                                                                                                                                                                                   │ old-k8s-version-065983       │ jenkins │ v1.37.0 │ 26 Oct 25 15:21 UTC │ 26 Oct 25 15:21 UTC │
	│ start   │ -p newest-cni-574718 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-574718            │ jenkins │ v1.37.0 │ 26 Oct 25 15:21 UTC │ 26 Oct 25 15:21 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-705037 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                     │ default-k8s-diff-port-705037 │ jenkins │ v1.37.0 │ 26 Oct 25 15:21 UTC │ 26 Oct 25 15:21 UTC │
	│ start   │ -p default-k8s-diff-port-705037 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-705037 │ jenkins │ v1.37.0 │ 26 Oct 25 15:21 UTC │ 26 Oct 25 15:22 UTC │
	│ image   │ no-preload-758002 image list --format=json                                                                                                                                                                                                  │ no-preload-758002            │ jenkins │ v1.37.0 │ 26 Oct 25 15:21 UTC │ 26 Oct 25 15:21 UTC │
	│ pause   │ -p no-preload-758002 --alsologtostderr -v=1                                                                                                                                                                                                 │ no-preload-758002            │ jenkins │ v1.37.0 │ 26 Oct 25 15:21 UTC │ 26 Oct 25 15:21 UTC │
	│ unpause │ -p no-preload-758002 --alsologtostderr -v=1                                                                                                                                                                                                 │ no-preload-758002            │ jenkins │ v1.37.0 │ 26 Oct 25 15:21 UTC │ 26 Oct 25 15:21 UTC │
	│ delete  │ -p no-preload-758002                                                                                                                                                                                                                        │ no-preload-758002            │ jenkins │ v1.37.0 │ 26 Oct 25 15:21 UTC │ 26 Oct 25 15:21 UTC │
	│ delete  │ -p no-preload-758002                                                                                                                                                                                                                        │ no-preload-758002            │ jenkins │ v1.37.0 │ 26 Oct 25 15:21 UTC │ 26 Oct 25 15:21 UTC │
	│ addons  │ enable metrics-server -p newest-cni-574718 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                     │ newest-cni-574718            │ jenkins │ v1.37.0 │ 26 Oct 25 15:21 UTC │ 26 Oct 25 15:21 UTC │
	│ stop    │ -p newest-cni-574718 --alsologtostderr -v=3                                                                                                                                                                                                 │ newest-cni-574718            │ jenkins │ v1.37.0 │ 26 Oct 25 15:21 UTC │ 26 Oct 25 15:22 UTC │
	│ addons  │ enable dashboard -p newest-cni-574718 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                │ newest-cni-574718            │ jenkins │ v1.37.0 │ 26 Oct 25 15:22 UTC │ 26 Oct 25 15:22 UTC │
	│ start   │ -p newest-cni-574718 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-574718            │ jenkins │ v1.37.0 │ 26 Oct 25 15:22 UTC │ 26 Oct 25 15:22 UTC │
	│ image   │ newest-cni-574718 image list --format=json                                                                                                                                                                                                  │ newest-cni-574718            │ jenkins │ v1.37.0 │ 26 Oct 25 15:22 UTC │ 26 Oct 25 15:22 UTC │
	│ pause   │ -p newest-cni-574718 --alsologtostderr -v=1                                                                                                                                                                                                 │ newest-cni-574718            │ jenkins │ v1.37.0 │ 26 Oct 25 15:22 UTC │ 26 Oct 25 15:22 UTC │
	│ unpause │ -p newest-cni-574718 --alsologtostderr -v=1                                                                                                                                                                                                 │ newest-cni-574718            │ jenkins │ v1.37.0 │ 26 Oct 25 15:22 UTC │ 26 Oct 25 15:22 UTC │
	│ delete  │ -p newest-cni-574718                                                                                                                                                                                                                        │ newest-cni-574718            │ jenkins │ v1.37.0 │ 26 Oct 25 15:22 UTC │ 26 Oct 25 15:22 UTC │
	│ delete  │ -p newest-cni-574718                                                                                                                                                                                                                        │ newest-cni-574718            │ jenkins │ v1.37.0 │ 26 Oct 25 15:22 UTC │ 26 Oct 25 15:22 UTC │
	│ image   │ embed-certs-163393 image list --format=json                                                                                                                                                                                                 │ embed-certs-163393           │ jenkins │ v1.37.0 │ 26 Oct 25 15:39 UTC │ 26 Oct 25 15:39 UTC │
	│ pause   │ -p embed-certs-163393 --alsologtostderr -v=1                                                                                                                                                                                                │ embed-certs-163393           │ jenkins │ v1.37.0 │ 26 Oct 25 15:39 UTC │ 26 Oct 25 15:39 UTC │
	│ unpause │ -p embed-certs-163393 --alsologtostderr -v=1                                                                                                                                                                                                │ embed-certs-163393           │ jenkins │ v1.37.0 │ 26 Oct 25 15:39 UTC │ 26 Oct 25 15:39 UTC │
	│ delete  │ -p embed-certs-163393                                                                                                                                                                                                                       │ embed-certs-163393           │ jenkins │ v1.37.0 │ 26 Oct 25 15:39 UTC │ 26 Oct 25 15:39 UTC │
	│ delete  │ -p embed-certs-163393                                                                                                                                                                                                                       │ embed-certs-163393           │ jenkins │ v1.37.0 │ 26 Oct 25 15:39 UTC │ 26 Oct 25 15:39 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────
─────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 15:22:08
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 15:22:08.024156  182377 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:22:08.024392  182377 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:22:08.024406  182377 out.go:374] Setting ErrFile to fd 2...
	I1026 15:22:08.024410  182377 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:22:08.024606  182377 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-137233/.minikube/bin
	I1026 15:22:08.025048  182377 out.go:368] Setting JSON to false
	I1026 15:22:08.025981  182377 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7462,"bootTime":1761484666,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 15:22:08.026077  182377 start.go:141] virtualization: kvm guest
	I1026 15:22:08.027688  182377 out.go:179] * [newest-cni-574718] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 15:22:08.028960  182377 notify.go:220] Checking for updates...
	I1026 15:22:08.028993  182377 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 15:22:08.030046  182377 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 15:22:08.031185  182377 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-137233/kubeconfig
	I1026 15:22:08.032356  182377 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-137233/.minikube
	I1026 15:22:08.033461  182377 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 15:22:08.034474  182377 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 15:22:08.035832  182377 config.go:182] Loaded profile config "newest-cni-574718": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:22:08.036313  182377 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 15:22:08.072389  182377 out.go:179] * Using the kvm2 driver based on existing profile
	I1026 15:22:08.073663  182377 start.go:305] selected driver: kvm2
	I1026 15:22:08.073682  182377 start.go:925] validating driver "kvm2" against &{Name:newest-cni-574718 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:newest-cni-574718 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.33 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s S
cheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:22:08.073825  182377 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 15:22:08.075175  182377 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1026 15:22:08.075218  182377 cni.go:84] Creating CNI manager for ""
	I1026 15:22:08.075284  182377 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 15:22:08.075345  182377 start.go:349] cluster config:
	{Name:newest-cni-574718 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-574718 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.33 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRe
quested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:22:08.075449  182377 iso.go:125] acquiring lock: {Name:mkfe78fcc13f0f0cc3fec30206c34a5da423b32d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 15:22:08.077008  182377 out.go:179] * Starting "newest-cni-574718" primary control-plane node in "newest-cni-574718" cluster
	I1026 15:22:08.078030  182377 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:22:08.078073  182377 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-137233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1026 15:22:08.078088  182377 cache.go:58] Caching tarball of preloaded images
	I1026 15:22:08.078221  182377 preload.go:233] Found /home/jenkins/minikube-integration/21664-137233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 15:22:08.078236  182377 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 15:22:08.078334  182377 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/newest-cni-574718/config.json ...
	I1026 15:22:08.078601  182377 start.go:360] acquireMachinesLock for newest-cni-574718: {Name:mka0e861669c2f6d38861d0614c7d3b8dd89392c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1026 15:22:08.078675  182377 start.go:364] duration metric: took 45.376µs to acquireMachinesLock for "newest-cni-574718"
	I1026 15:22:08.078701  182377 start.go:96] Skipping create...Using existing machine configuration
	I1026 15:22:08.078711  182377 fix.go:54] fixHost starting: 
	I1026 15:22:08.080626  182377 fix.go:112] recreateIfNeeded on newest-cni-574718: state=Stopped err=<nil>
	W1026 15:22:08.080669  182377 fix.go:138] unexpected machine state, will restart: <nil>
	I1026 15:22:06.333558  181858 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:22:06.357436  181858 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-705037" to be "Ready" ...
	I1026 15:22:06.360857  181858 node_ready.go:49] node "default-k8s-diff-port-705037" is "Ready"
	I1026 15:22:06.360901  181858 node_ready.go:38] duration metric: took 3.362736ms for node "default-k8s-diff-port-705037" to be "Ready" ...
	I1026 15:22:06.360919  181858 api_server.go:52] waiting for apiserver process to appear ...
	I1026 15:22:06.360981  181858 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:22:06.385860  181858 api_server.go:72] duration metric: took 266.62216ms to wait for apiserver process to appear ...
	I1026 15:22:06.385897  181858 api_server.go:88] waiting for apiserver healthz status ...
	I1026 15:22:06.385937  181858 api_server.go:253] Checking apiserver healthz at https://192.168.72.253:8444/healthz ...
	I1026 15:22:06.392647  181858 api_server.go:279] https://192.168.72.253:8444/healthz returned 200:
	ok
	I1026 15:22:06.393766  181858 api_server.go:141] control plane version: v1.34.1
	I1026 15:22:06.393803  181858 api_server.go:131] duration metric: took 7.895398ms to wait for apiserver health ...
	I1026 15:22:06.393816  181858 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 15:22:06.397637  181858 system_pods.go:59] 8 kube-system pods found
	I1026 15:22:06.397674  181858 system_pods.go:61] "coredns-66bc5c9577-fs558" [35c18482-b39d-4e3f-aafd-51642938f5b0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:22:06.397686  181858 system_pods.go:61] "etcd-default-k8s-diff-port-705037" [8f9b42db-0213-4e05-b438-59d38eab399b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:22:06.397698  181858 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-705037" [b8aa7de2-f2f9-447e-83a4-ce4eed131bf3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:22:06.397709  181858 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-705037" [48a3f44e-dfb0-46cb-969f-cf88e075e662] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:22:06.397718  181858 system_pods.go:61] "kube-proxy-kr5kl" [7598b50f-deee-406f-86fc-1f57c2de4887] Running
	I1026 15:22:06.397728  181858 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-705037" [130cd574-dab4-4029-9fa0-47959d8b0eac] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:22:06.397746  181858 system_pods.go:61] "metrics-server-746fcd58dc-nsvb5" [28c11adc-3f4d-46bc-abc5-f9b466e2ca10] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 15:22:06.397756  181858 system_pods.go:61] "storage-provisioner" [974398e3-6fd7-44da-9ec6-a726c71c9e43] Running
	I1026 15:22:06.397766  181858 system_pods.go:74] duration metric: took 3.941599ms to wait for pod list to return data ...
	I1026 15:22:06.397779  181858 default_sa.go:34] waiting for default service account to be created ...
	I1026 15:22:06.403865  181858 default_sa.go:45] found service account: "default"
	I1026 15:22:06.403888  181858 default_sa.go:55] duration metric: took 6.102699ms for default service account to be created ...
	I1026 15:22:06.403898  181858 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 15:22:06.408267  181858 system_pods.go:86] 8 kube-system pods found
	I1026 15:22:06.408305  181858 system_pods.go:89] "coredns-66bc5c9577-fs558" [35c18482-b39d-4e3f-aafd-51642938f5b0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:22:06.408318  181858 system_pods.go:89] "etcd-default-k8s-diff-port-705037" [8f9b42db-0213-4e05-b438-59d38eab399b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:22:06.408330  181858 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-705037" [b8aa7de2-f2f9-447e-83a4-ce4eed131bf3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:22:06.408339  181858 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-705037" [48a3f44e-dfb0-46cb-969f-cf88e075e662] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:22:06.408345  181858 system_pods.go:89] "kube-proxy-kr5kl" [7598b50f-deee-406f-86fc-1f57c2de4887] Running
	I1026 15:22:06.408354  181858 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-705037" [130cd574-dab4-4029-9fa0-47959d8b0eac] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:22:06.408361  181858 system_pods.go:89] "metrics-server-746fcd58dc-nsvb5" [28c11adc-3f4d-46bc-abc5-f9b466e2ca10] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 15:22:06.408373  181858 system_pods.go:89] "storage-provisioner" [974398e3-6fd7-44da-9ec6-a726c71c9e43] Running
	I1026 15:22:06.408383  181858 system_pods.go:126] duration metric: took 4.477868ms to wait for k8s-apps to be running ...
	I1026 15:22:06.408393  181858 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 15:22:06.408450  181858 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 15:22:06.432635  181858 system_svc.go:56] duration metric: took 24.227246ms WaitForService to wait for kubelet
	I1026 15:22:06.432676  181858 kubeadm.go:586] duration metric: took 313.448447ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 15:22:06.432702  181858 node_conditions.go:102] verifying NodePressure condition ...
	I1026 15:22:06.435956  181858 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 15:22:06.435988  181858 node_conditions.go:123] node cpu capacity is 2
	I1026 15:22:06.436002  181858 node_conditions.go:105] duration metric: took 3.294076ms to run NodePressure ...
	I1026 15:22:06.436018  181858 start.go:241] waiting for startup goroutines ...
	I1026 15:22:06.515065  181858 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 15:22:06.572989  181858 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:22:06.584697  181858 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1026 15:22:06.584737  181858 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1026 15:22:06.595077  181858 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1026 15:22:06.595106  181858 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1026 15:22:06.638704  181858 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1026 15:22:06.638736  181858 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1026 15:22:06.659544  181858 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1026 15:22:06.659582  181858 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1026 15:22:06.702281  181858 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 15:22:06.702320  181858 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1026 15:22:06.711972  181858 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1026 15:22:06.712006  181858 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1026 15:22:06.757866  181858 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 15:22:06.788030  181858 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1026 15:22:06.788064  181858 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1026 15:22:06.847661  181858 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1026 15:22:06.847708  181858 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1026 15:22:06.929153  181858 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1026 15:22:06.929177  181858 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1026 15:22:06.986412  181858 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1026 15:22:06.986448  181858 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1026 15:22:07.045193  181858 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1026 15:22:07.045218  181858 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1026 15:22:07.093617  181858 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 15:22:07.093654  181858 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1026 15:22:07.162711  181858 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 15:22:08.298101  181858 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.725070201s)
	I1026 15:22:08.369209  181858 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.61128174s)
	I1026 15:22:08.369257  181858 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-705037"
	I1026 15:22:08.605124  181858 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.442357492s)
	I1026 15:22:08.606598  181858 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-705037 addons enable metrics-server
	
	I1026 15:22:08.607892  181858 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I1026 15:22:08.609005  181858 addons.go:514] duration metric: took 2.489743866s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I1026 15:22:08.609043  181858 start.go:246] waiting for cluster config update ...
	I1026 15:22:08.609058  181858 start.go:255] writing updated cluster config ...
	I1026 15:22:08.609345  181858 ssh_runner.go:195] Run: rm -f paused
	I1026 15:22:08.616260  181858 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:22:08.620760  181858 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-fs558" in "kube-system" namespace to be "Ready" or be gone ...
	W1026 15:22:10.628668  181858 pod_ready.go:104] pod "coredns-66bc5c9577-fs558" is not "Ready", error: <nil>
	I1026 15:22:08.082049  182377 out.go:252] * Restarting existing kvm2 VM for "newest-cni-574718" ...
	I1026 15:22:08.082089  182377 main.go:141] libmachine: starting domain...
	I1026 15:22:08.082102  182377 main.go:141] libmachine: ensuring networks are active...
	I1026 15:22:08.083029  182377 main.go:141] libmachine: Ensuring network default is active
	I1026 15:22:08.083543  182377 main.go:141] libmachine: Ensuring network mk-newest-cni-574718 is active
	I1026 15:22:08.084108  182377 main.go:141] libmachine: getting domain XML...
	I1026 15:22:08.085257  182377 main.go:141] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>newest-cni-574718</name>
	  <uuid>3e8359f9-dc38-4472-b6d3-ffe603a5ee64</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21664-137233/.minikube/machines/newest-cni-574718/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21664-137233/.minikube/machines/newest-cni-574718/newest-cni-574718.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:7b:b5:97'/>
	      <source network='mk-newest-cni-574718'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:a1:2e:d8'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
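The XML dump above is what the kvm2 driver hands to libvirt just before the "waiting for domain to start" step that follows. As a rough sketch of the same define-and-start flow using the libvirt Go bindings (not minikube's actual driver code; the connection URI matches the KVMQemuURI value seen later in the cluster config, while the file name and error handling are illustrative assumptions):

	// Minimal sketch, assuming the libvirt.org/go/libvirt bindings: define a
	// persistent domain from an XML description and start it. Names, paths and
	// error handling here are illustrative, not the kvm2 driver's real code.
	package main

	import (
		"fmt"
		"os"

		libvirt "libvirt.org/go/libvirt"
	)

	func main() {
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		xml, err := os.ReadFile("newest-cni-574718.xml") // a domain definition like the one logged above
		if err != nil {
			panic(err)
		}

		dom, err := conn.DomainDefineXML(string(xml)) // (re)define the persistent domain
		if err != nil {
			panic(err)
		}
		defer dom.Free()

		if err := dom.Create(); err != nil { // corresponds to "starting domain..." / "domain is now running"
			panic(err)
		}
		fmt.Println("domain is now running")
	}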
	
	I1026 15:22:09.396910  182377 main.go:141] libmachine: waiting for domain to start...
	I1026 15:22:09.398416  182377 main.go:141] libmachine: domain is now running
	I1026 15:22:09.398445  182377 main.go:141] libmachine: waiting for IP...
	I1026 15:22:09.399448  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:09.400230  182377 main.go:141] libmachine: domain newest-cni-574718 has current primary IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:09.400244  182377 main.go:141] libmachine: found domain IP: 192.168.61.33
	I1026 15:22:09.400250  182377 main.go:141] libmachine: reserving static IP address...
	I1026 15:22:09.400772  182377 main.go:141] libmachine: found host DHCP lease matching {name: "newest-cni-574718", mac: "52:54:00:7b:b5:97", ip: "192.168.61.33"} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:21:24 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:09.400809  182377 main.go:141] libmachine: skip adding static IP to network mk-newest-cni-574718 - found existing host DHCP lease matching {name: "newest-cni-574718", mac: "52:54:00:7b:b5:97", ip: "192.168.61.33"}
	I1026 15:22:09.400837  182377 main.go:141] libmachine: reserved static IP address 192.168.61.33 for domain newest-cni-574718
	I1026 15:22:09.400849  182377 main.go:141] libmachine: waiting for SSH...
	I1026 15:22:09.400857  182377 main.go:141] libmachine: Getting to WaitForSSH function...
	I1026 15:22:09.403391  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:09.403822  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:21:24 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:09.403850  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:09.404075  182377 main.go:141] libmachine: Using SSH client type: native
	I1026 15:22:09.404289  182377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 192.168.61.33 22 <nil> <nil>}
	I1026 15:22:09.404299  182377 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1026 15:22:12.493681  182377 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.33:22: connect: no route to host
	W1026 15:22:12.635327  181858 pod_ready.go:104] pod "coredns-66bc5c9577-fs558" is not "Ready", error: <nil>
	I1026 15:22:14.627621  181858 pod_ready.go:94] pod "coredns-66bc5c9577-fs558" is "Ready"
	I1026 15:22:14.627655  181858 pod_ready.go:86] duration metric: took 6.00687198s for pod "coredns-66bc5c9577-fs558" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:22:14.630599  181858 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-705037" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:22:14.634975  181858 pod_ready.go:94] pod "etcd-default-k8s-diff-port-705037" is "Ready"
	I1026 15:22:14.635007  181858 pod_ready.go:86] duration metric: took 4.382539ms for pod "etcd-default-k8s-diff-port-705037" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:22:14.637185  181858 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-705037" in "kube-system" namespace to be "Ready" or be gone ...
	W1026 15:22:16.644581  181858 pod_ready.go:104] pod "kube-apiserver-default-k8s-diff-port-705037" is not "Ready", error: <nil>
	W1026 15:22:19.144809  181858 pod_ready.go:104] pod "kube-apiserver-default-k8s-diff-port-705037" is not "Ready", error: <nil>
	I1026 15:22:20.143611  181858 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-705037" is "Ready"
	I1026 15:22:20.143640  181858 pod_ready.go:86] duration metric: took 5.506432171s for pod "kube-apiserver-default-k8s-diff-port-705037" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:22:20.145536  181858 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-705037" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:22:20.149100  181858 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-705037" is "Ready"
	I1026 15:22:20.149131  181858 pod_ready.go:86] duration metric: took 3.572718ms for pod "kube-controller-manager-default-k8s-diff-port-705037" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:22:20.151047  181858 pod_ready.go:83] waiting for pod "kube-proxy-kr5kl" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:22:20.155496  181858 pod_ready.go:94] pod "kube-proxy-kr5kl" is "Ready"
	I1026 15:22:20.155521  181858 pod_ready.go:86] duration metric: took 4.452008ms for pod "kube-proxy-kr5kl" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:22:20.157137  181858 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-705037" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:22:20.424601  181858 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-705037" is "Ready"
	I1026 15:22:20.424645  181858 pod_ready.go:86] duration metric: took 267.484691ms for pod "kube-scheduler-default-k8s-diff-port-705037" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 15:22:20.424664  181858 pod_ready.go:40] duration metric: took 11.808360636s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 15:22:20.472398  181858 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1026 15:22:20.474272  181858 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-705037" cluster and "default" namespace by default
	I1026 15:22:18.573877  182377 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.33:22: connect: no route to host
	I1026 15:22:21.678716  182377 main.go:141] libmachine: SSH cmd err, output: <nil>: 
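The retries above ("no route to host" until the guest network comes up, then a clean empty result) are the "waiting for SSH" phase: minikube keeps running a trivial `exit 0` over SSH until it succeeds. A simplified stand-alone sketch of that loop with golang.org/x/crypto/ssh; the username, key path, retry count and timeouts are assumptions for illustration, not the libmachine implementation:

	// Keep trying to open an SSH session and run `exit 0` until the guest answers.
	package main

	import (
		"fmt"
		"log"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	func waitForSSH(addr, keyPath string) error {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return err
		}
		cfg := &ssh.ClientConfig{
			User:            "docker", // the user shown in the sshutil.go log lines
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(),
			Timeout:         5 * time.Second,
		}
		for i := 0; i < 60; i++ {
			client, derr := ssh.Dial("tcp", addr, cfg)
			if derr == nil {
				sess, serr := client.NewSession()
				if serr == nil {
					rerr := sess.Run("exit 0") // the same trivial probe command as in the log
					sess.Close()
					client.Close()
					if rerr == nil {
						return nil
					}
				} else {
					client.Close()
				}
			}
			time.Sleep(3 * time.Second) // "no route to host" while the VM is still booting
		}
		return fmt.Errorf("ssh to %s never became ready", addr)
	}

	func main() {
		// The key path is hypothetical; point it at the machine's id_rsa.
		if err := waitForSSH("192.168.61.33:22", "id_rsa"); err != nil {
			log.Fatal(err)
		}
		log.Println("SSH is up")
	}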
	I1026 15:22:21.682223  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:21.682617  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:21.682640  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:21.682859  182377 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/newest-cni-574718/config.json ...
	I1026 15:22:21.683068  182377 machine.go:93] provisionDockerMachine start ...
	I1026 15:22:21.685439  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:21.685814  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:21.685841  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:21.686028  182377 main.go:141] libmachine: Using SSH client type: native
	I1026 15:22:21.686280  182377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 192.168.61.33 22 <nil> <nil>}
	I1026 15:22:21.686297  182377 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 15:22:21.789433  182377 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1026 15:22:21.789491  182377 buildroot.go:166] provisioning hostname "newest-cni-574718"
	I1026 15:22:21.792404  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:21.792911  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:21.792937  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:21.793176  182377 main.go:141] libmachine: Using SSH client type: native
	I1026 15:22:21.793395  182377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 192.168.61.33 22 <nil> <nil>}
	I1026 15:22:21.793410  182377 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-574718 && echo "newest-cni-574718" | sudo tee /etc/hostname
	I1026 15:22:21.914128  182377 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-574718
	
	I1026 15:22:21.917275  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:21.917738  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:21.917764  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:21.917937  182377 main.go:141] libmachine: Using SSH client type: native
	I1026 15:22:21.918176  182377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 192.168.61.33 22 <nil> <nil>}
	I1026 15:22:21.918200  182377 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-574718' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-574718/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-574718' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 15:22:22.026151  182377 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 15:22:22.026183  182377 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21664-137233/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-137233/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-137233/.minikube}
	I1026 15:22:22.026217  182377 buildroot.go:174] setting up certificates
	I1026 15:22:22.026229  182377 provision.go:84] configureAuth start
	I1026 15:22:22.029052  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:22.029554  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:22.029582  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:22.031873  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:22.032223  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:22.032249  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:22.032371  182377 provision.go:143] copyHostCerts
	I1026 15:22:22.032450  182377 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-137233/.minikube/ca.pem, removing ...
	I1026 15:22:22.032491  182377 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-137233/.minikube/ca.pem
	I1026 15:22:22.032577  182377 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-137233/.minikube/ca.pem (1082 bytes)
	I1026 15:22:22.032704  182377 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-137233/.minikube/cert.pem, removing ...
	I1026 15:22:22.032719  182377 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-137233/.minikube/cert.pem
	I1026 15:22:22.032762  182377 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-137233/.minikube/cert.pem (1123 bytes)
	I1026 15:22:22.032845  182377 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-137233/.minikube/key.pem, removing ...
	I1026 15:22:22.032855  182377 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-137233/.minikube/key.pem
	I1026 15:22:22.032893  182377 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-137233/.minikube/key.pem (1675 bytes)
	I1026 15:22:22.032958  182377 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-137233/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca-key.pem org=jenkins.newest-cni-574718 san=[127.0.0.1 192.168.61.33 localhost minikube newest-cni-574718]
	I1026 15:22:22.469944  182377 provision.go:177] copyRemoteCerts
	I1026 15:22:22.470018  182377 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 15:22:22.472561  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:22.472948  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:22.472970  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:22.473117  182377 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/newest-cni-574718/id_rsa Username:docker}
	I1026 15:22:22.554777  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1026 15:22:22.582124  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1026 15:22:22.610149  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 15:22:22.638169  182377 provision.go:87] duration metric: took 611.92185ms to configureAuth
	I1026 15:22:22.638199  182377 buildroot.go:189] setting minikube options for container-runtime
	I1026 15:22:22.638398  182377 config.go:182] Loaded profile config "newest-cni-574718": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:22:22.641177  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:22.641627  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:22.641657  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:22.641842  182377 main.go:141] libmachine: Using SSH client type: native
	I1026 15:22:22.642047  182377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 192.168.61.33 22 <nil> <nil>}
	I1026 15:22:22.642063  182377 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 15:22:22.906384  182377 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 15:22:22.906420  182377 machine.go:96] duration metric: took 1.223336761s to provisionDockerMachine
	I1026 15:22:22.906434  182377 start.go:293] postStartSetup for "newest-cni-574718" (driver="kvm2")
	I1026 15:22:22.906449  182377 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 15:22:22.906556  182377 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 15:22:22.909934  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:22.910412  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:22.910439  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:22.910638  182377 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/newest-cni-574718/id_rsa Username:docker}
	I1026 15:22:22.992977  182377 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 15:22:22.997825  182377 info.go:137] Remote host: Buildroot 2025.02
	I1026 15:22:22.997860  182377 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-137233/.minikube/addons for local assets ...
	I1026 15:22:22.997933  182377 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-137233/.minikube/files for local assets ...
	I1026 15:22:22.998039  182377 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-137233/.minikube/files/etc/ssl/certs/1412332.pem -> 1412332.pem in /etc/ssl/certs
	I1026 15:22:22.998136  182377 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 15:22:23.009341  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/files/etc/ssl/certs/1412332.pem --> /etc/ssl/certs/1412332.pem (1708 bytes)
	I1026 15:22:23.040890  182377 start.go:296] duration metric: took 134.438124ms for postStartSetup
	I1026 15:22:23.040950  182377 fix.go:56] duration metric: took 14.962237903s for fixHost
	I1026 15:22:23.044164  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:23.044594  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:23.044630  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:23.044933  182377 main.go:141] libmachine: Using SSH client type: native
	I1026 15:22:23.045233  182377 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841900] 0x844600 <nil>  [] 0s} 192.168.61.33 22 <nil> <nil>}
	I1026 15:22:23.045254  182377 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1026 15:22:23.147520  182377 main.go:141] libmachine: SSH cmd err, output: <nil>: 1761492143.098139468
	
	I1026 15:22:23.147547  182377 fix.go:216] guest clock: 1761492143.098139468
	I1026 15:22:23.147556  182377 fix.go:229] Guest: 2025-10-26 15:22:23.098139468 +0000 UTC Remote: 2025-10-26 15:22:23.04095679 +0000 UTC m=+15.073904102 (delta=57.182678ms)
	I1026 15:22:23.147581  182377 fix.go:200] guest clock delta is within tolerance: 57.182678ms
	I1026 15:22:23.147589  182377 start.go:83] releasing machines lock for "newest-cni-574718", held for 15.068897915s
	I1026 15:22:23.150728  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:23.151142  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:23.151167  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:23.151719  182377 ssh_runner.go:195] Run: cat /version.json
	I1026 15:22:23.151804  182377 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 15:22:23.155059  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:23.155294  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:23.155561  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:23.155595  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:23.155739  182377 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/newest-cni-574718/id_rsa Username:docker}
	I1026 15:22:23.155910  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:23.155945  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:23.156130  182377 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/newest-cni-574718/id_rsa Username:docker}
	I1026 15:22:23.231442  182377 ssh_runner.go:195] Run: systemctl --version
	I1026 15:22:23.263168  182377 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 15:22:23.405941  182377 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 15:22:23.412607  182377 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 15:22:23.412693  182377 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 15:22:23.431222  182377 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 15:22:23.431247  182377 start.go:495] detecting cgroup driver to use...
	I1026 15:22:23.431329  182377 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 15:22:23.449871  182377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 15:22:23.466135  182377 docker.go:218] disabling cri-docker service (if available) ...
	I1026 15:22:23.466207  182377 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 15:22:23.483845  182377 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 15:22:23.499194  182377 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 15:22:23.646146  182377 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 15:22:23.864499  182377 docker.go:234] disabling docker service ...
	I1026 15:22:23.864576  182377 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 15:22:23.882304  182377 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 15:22:23.897571  182377 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 15:22:24.064966  182377 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 15:22:24.201804  182377 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 15:22:24.216914  182377 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 15:22:24.239366  182377 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 15:22:24.239426  182377 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:22:24.251236  182377 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1026 15:22:24.251318  182377 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:22:24.263630  182377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:22:24.275134  182377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:22:24.287125  182377 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 15:22:24.302136  182377 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:22:24.315011  182377 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:22:24.335688  182377 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 15:22:24.347573  182377 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 15:22:24.358181  182377 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1026 15:22:24.358260  182377 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1026 15:22:24.379177  182377 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
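The three commands above are the usual bridge-netfilter preparation: the sysctl probe fails because br_netfilter is not loaded yet, so the module is loaded and IPv4 forwarding is switched on. A stand-alone Go sketch of the same sequence (run as root; the procfs paths are the standard ones, the program itself is illustrative rather than minikube's crio setup helper):

	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func main() {
		const bridgeSysctl = "/proc/sys/net/bridge/bridge-nf-call-iptables"

		if _, err := os.Stat(bridgeSysctl); err != nil {
			// Mirrors the fallback in the log: the sysctl file only exists once
			// the br_netfilter module is loaded.
			if out, merr := exec.Command("modprobe", "br_netfilter").CombinedOutput(); merr != nil {
				log.Fatalf("modprobe br_netfilter: %v: %s", merr, out)
			}
		}

		// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
		if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
			log.Fatal(err)
		}
		log.Println("bridge netfilter and ip_forward are enabled")
	}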
	I1026 15:22:24.391253  182377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:22:24.532080  182377 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 15:22:24.652383  182377 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 15:22:24.652516  182377 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 15:22:24.658249  182377 start.go:563] Will wait 60s for crictl version
	I1026 15:22:24.658308  182377 ssh_runner.go:195] Run: which crictl
	I1026 15:22:24.662623  182377 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1026 15:22:24.701747  182377 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1026 15:22:24.701833  182377 ssh_runner.go:195] Run: crio --version
	I1026 15:22:24.730381  182377 ssh_runner.go:195] Run: crio --version
	I1026 15:22:24.761145  182377 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1026 15:22:24.764994  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:24.765410  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:24.765433  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:24.765621  182377 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1026 15:22:24.770397  182377 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:22:24.787194  182377 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1026 15:22:24.788437  182377 kubeadm.go:883] updating cluster {Name:newest-cni-574718 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-574718 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.33 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 15:22:24.788570  182377 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 15:22:24.788622  182377 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:22:24.828217  182377 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1026 15:22:24.828316  182377 ssh_runner.go:195] Run: which lz4
	I1026 15:22:24.833073  182377 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1026 15:22:24.838213  182377 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1026 15:22:24.838246  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1026 15:22:26.232172  182377 crio.go:462] duration metric: took 1.399140151s to copy over tarball
	I1026 15:22:26.232290  182377 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1026 15:22:28.031969  182377 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.79963377s)
	I1026 15:22:28.032009  182377 crio.go:469] duration metric: took 1.799794706s to extract the tarball
	I1026 15:22:28.032019  182377 ssh_runner.go:146] rm: /preloaded.tar.lz4
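The preload handling above amounts to: copy the cached image tarball into the guest, unpack it into /var with extended attributes preserved, then delete the tarball. A local sketch of the extract-and-clean-up half, using the same tar flags as the logged command (the tarball path comes from the log; the wrapper itself is illustrative):

	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func main() {
		tarball := "/preloaded.tar.lz4"

		// Same flags as the logged command: keep xattrs (file capabilities on
		// some binaries) and decompress through lz4 while extracting into /var.
		cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", tarball)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("extracting preload: %v", err)
		}
		if err := os.Remove(tarball); err != nil {
			log.Fatal(err)
		}
	}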
	I1026 15:22:28.083266  182377 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 15:22:28.129231  182377 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 15:22:28.129262  182377 cache_images.go:85] Images are preloaded, skipping loading
	I1026 15:22:28.129271  182377 kubeadm.go:934] updating node { 192.168.61.33 8443 v1.34.1 crio true true} ...
	I1026 15:22:28.129386  182377 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-574718 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.33
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-574718 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 15:22:28.129473  182377 ssh_runner.go:195] Run: crio config
	I1026 15:22:28.175414  182377 cni.go:84] Creating CNI manager for ""
	I1026 15:22:28.175448  182377 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 15:22:28.175493  182377 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1026 15:22:28.175532  182377 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.33 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-574718 NodeName:newest-cni-574718 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.33"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.33 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 15:22:28.175679  182377 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.33
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-574718"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.33"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.33"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
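	# The kubeadm/kubelet/kube-proxy YAML above is rendered by minikube from the
	# "kubeadm options" struct logged a few lines earlier. As a toy illustration of
	# that render step (the struct, field names and template below are invented for
	# the example and are not minikube's bootstrapper template), a Go text/template
	# sketch for just the networking stanza:
	#
	#	package main
	#
	#	import (
	#		"os"
	#		"text/template"
	#	)
	#
	#	// networking mirrors the values that end up in the ClusterConfiguration above.
	#	type networking struct {
	#		DNSDomain     string
	#		PodSubnet     string
	#		ServiceSubnet string
	#	}
	#
	#	var stanza = template.Must(template.New("networking").Parse(
	#		"networking:\n  dnsDomain: {{.DNSDomain}}\n  podSubnet: \"{{.PodSubnet}}\"\n  serviceSubnet: {{.ServiceSubnet}}\n"))
	#
	#	func main() {
	#		n := networking{DNSDomain: "cluster.local", PodSubnet: "10.42.0.0/16", ServiceSubnet: "10.96.0.0/12"}
	#		if err := stanza.Execute(os.Stdout, n); err != nil {
	#			panic(err)
	#		}
	#	}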
	
	I1026 15:22:28.175746  182377 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 15:22:28.189114  182377 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 15:22:28.189184  182377 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 15:22:28.201285  182377 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1026 15:22:28.222167  182377 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 15:22:28.241882  182377 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1026 15:22:28.262267  182377 ssh_runner.go:195] Run: grep 192.168.61.33	control-plane.minikube.internal$ /etc/hosts
	I1026 15:22:28.266495  182377 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.33	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 15:22:28.281183  182377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:22:28.445545  182377 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:22:28.481631  182377 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/newest-cni-574718 for IP: 192.168.61.33
	I1026 15:22:28.481655  182377 certs.go:195] generating shared ca certs ...
	I1026 15:22:28.481672  182377 certs.go:227] acquiring lock for ca certs: {Name:mk93131c71acd79b9ab313e88723331b0af2d4bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:22:28.481853  182377 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-137233/.minikube/ca.key
	I1026 15:22:28.481904  182377 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-137233/.minikube/proxy-client-ca.key
	I1026 15:22:28.481916  182377 certs.go:257] generating profile certs ...
	I1026 15:22:28.482010  182377 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/newest-cni-574718/client.key
	I1026 15:22:28.482074  182377 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/newest-cni-574718/apiserver.key.59f77b64
	I1026 15:22:28.482115  182377 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/newest-cni-574718/proxy-client.key
	I1026 15:22:28.482217  182377 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/141233.pem (1338 bytes)
	W1026 15:22:28.482254  182377 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-137233/.minikube/certs/141233_empty.pem, impossibly tiny 0 bytes
	I1026 15:22:28.482262  182377 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca-key.pem (1679 bytes)
	I1026 15:22:28.482285  182377 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/ca.pem (1082 bytes)
	I1026 15:22:28.482316  182377 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/cert.pem (1123 bytes)
	I1026 15:22:28.482340  182377 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-137233/.minikube/certs/key.pem (1675 bytes)
	I1026 15:22:28.482379  182377 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-137233/.minikube/files/etc/ssl/certs/1412332.pem (1708 bytes)
	I1026 15:22:28.483044  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 15:22:28.517526  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1026 15:22:28.558414  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 15:22:28.586297  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 15:22:28.613805  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/newest-cni-574718/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1026 15:22:28.642929  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/newest-cni-574718/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 15:22:28.671810  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/newest-cni-574718/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 15:22:28.700191  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/newest-cni-574718/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 15:22:28.729422  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 15:22:28.756494  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/certs/141233.pem --> /usr/share/ca-certificates/141233.pem (1338 bytes)
	I1026 15:22:28.783988  182377 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-137233/.minikube/files/etc/ssl/certs/1412332.pem --> /usr/share/ca-certificates/1412332.pem (1708 bytes)
	I1026 15:22:28.812588  182377 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 15:22:28.832551  182377 ssh_runner.go:195] Run: openssl version
	I1026 15:22:28.838355  182377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 15:22:28.850638  182377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:22:28.855574  182377 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 14:16 /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:22:28.855636  182377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 15:22:28.862555  182377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 15:22:28.874412  182377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/141233.pem && ln -fs /usr/share/ca-certificates/141233.pem /etc/ssl/certs/141233.pem"
	I1026 15:22:28.886395  182377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/141233.pem
	I1026 15:22:28.891025  182377 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 14:24 /usr/share/ca-certificates/141233.pem
	I1026 15:22:28.891082  182377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/141233.pem
	I1026 15:22:28.897923  182377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/141233.pem /etc/ssl/certs/51391683.0"
	I1026 15:22:28.910115  182377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1412332.pem && ln -fs /usr/share/ca-certificates/1412332.pem /etc/ssl/certs/1412332.pem"
	I1026 15:22:28.922622  182377 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1412332.pem
	I1026 15:22:28.927296  182377 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 14:24 /usr/share/ca-certificates/1412332.pem
	I1026 15:22:28.927337  182377 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1412332.pem
	I1026 15:22:28.934138  182377 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1412332.pem /etc/ssl/certs/3ec20f2e.0"
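(For readers tracing the certificate steps above: the pattern of running "openssl x509 -hash -noout" on a copied PEM and then symlinking "<hash>.0" to it is what makes the CA trusted by OpenSSL-based clients inside the guest. The following is a minimal standalone sketch of that pattern in Go; installCACert is a hypothetical helper for illustration, not minikube's actual implementation.)

	// Sketch: hash a PEM with openssl and symlink <hash>.0 to it in the
	// system cert directory, mirroring the log lines above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func installCACert(pemPath, certsDir string) error {
		// openssl prints the subject hash, e.g. "b5213941".
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // replace any stale link, as "ln -fs" would
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
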
	I1026 15:22:28.945693  182377 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 15:22:28.950557  182377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 15:22:28.957416  182377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 15:22:28.964523  182377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 15:22:28.971586  182377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 15:22:28.978762  182377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 15:22:28.986053  182377 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
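(The "-checkend 86400" probes above verify that each control-plane certificate remains valid for at least another 24 hours. A native-Go equivalent of that check, as a sketch only; expiresWithin is a hypothetical helper, and the path is just one of the certificates probed in the log.)

	// Sketch: report whether a certificate expires within the next 24h,
	// mirroring "openssl x509 -checkend 86400".
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}
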
	I1026 15:22:28.993134  182377 kubeadm.go:400] StartCluster: {Name:newest-cni-574718 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34
.1 ClusterName:newest-cni-574718 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.33 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil>
ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 15:22:28.993263  182377 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 15:22:28.993323  182377 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 15:22:29.032028  182377 cri.go:89] found id: ""
	I1026 15:22:29.032103  182377 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 15:22:29.043952  182377 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1026 15:22:29.043972  182377 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1026 15:22:29.044040  182377 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1026 15:22:29.056289  182377 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1026 15:22:29.057119  182377 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-574718" does not appear in /home/jenkins/minikube-integration/21664-137233/kubeconfig
	I1026 15:22:29.057648  182377 kubeconfig.go:62] /home/jenkins/minikube-integration/21664-137233/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-574718" cluster setting kubeconfig missing "newest-cni-574718" context setting]
	I1026 15:22:29.058341  182377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-137233/kubeconfig: {Name:mka07626640e842c6c2177ad5f101c4a2dd91d4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:22:29.060135  182377 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1026 15:22:29.070432  182377 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.61.33
	I1026 15:22:29.070477  182377 kubeadm.go:1160] stopping kube-system containers ...
	I1026 15:22:29.070498  182377 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1026 15:22:29.070565  182377 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 15:22:29.108499  182377 cri.go:89] found id: ""
	I1026 15:22:29.108625  182377 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1026 15:22:29.128646  182377 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 15:22:29.140200  182377 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 15:22:29.140217  182377 kubeadm.go:157] found existing configuration files:
	
	I1026 15:22:29.140259  182377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 15:22:29.150547  182377 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 15:22:29.150618  182377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 15:22:29.161551  182377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 15:22:29.171576  182377 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 15:22:29.171637  182377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 15:22:29.182113  182377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 15:22:29.191928  182377 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 15:22:29.191975  182377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 15:22:29.202335  182377 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 15:22:29.212043  182377 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 15:22:29.212089  182377 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
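(The sweep above keeps an /etc/kubernetes/*.conf file only if it already references the expected control-plane endpoint and removes it otherwise so kubeadm can regenerate it. A rough sketch of that logic, under the assumption of a hypothetical pruneStaleConf helper; it is not minikube's actual code path.)

	// Sketch: drop kubeconfig-style files that do not mention the expected
	// control-plane endpoint, matching the grep-then-rm sequence in the log.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func pruneStaleConf(endpoint string, paths ...string) {
		for _, p := range paths {
			data, err := os.ReadFile(p)
			if err != nil || !strings.Contains(string(data), endpoint) {
				// Missing or pointing elsewhere: remove it (errors ignored, as in the log).
				_ = os.Remove(p)
				fmt.Println("removed stale config:", p)
			}
		}
	}

	func main() {
		pruneStaleConf("https://control-plane.minikube.internal:8443",
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		)
	}
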
	I1026 15:22:29.222315  182377 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 15:22:29.232961  182377 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1026 15:22:29.285078  182377 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1026 15:22:30.940058  182377 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.654938215s)
	I1026 15:22:30.940132  182377 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1026 15:22:31.190262  182377 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1026 15:22:31.246873  182377 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1026 15:22:31.330409  182377 api_server.go:52] waiting for apiserver process to appear ...
	I1026 15:22:31.330532  182377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:22:31.830602  182377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:22:32.330655  182377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:22:32.830666  182377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:22:33.330601  182377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:22:33.376334  182377 api_server.go:72] duration metric: took 2.045939712s to wait for apiserver process to appear ...
	I1026 15:22:33.376368  182377 api_server.go:88] waiting for apiserver healthz status ...
	I1026 15:22:33.376393  182377 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1026 15:22:33.377001  182377 api_server.go:269] stopped: https://192.168.61.33:8443/healthz: Get "https://192.168.61.33:8443/healthz": dial tcp 192.168.61.33:8443: connect: connection refused
	I1026 15:22:33.876665  182377 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1026 15:22:36.154624  182377 api_server.go:279] https://192.168.61.33:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1026 15:22:36.154676  182377 api_server.go:103] status: https://192.168.61.33:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1026 15:22:36.154695  182377 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1026 15:22:36.184996  182377 api_server.go:279] https://192.168.61.33:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1026 15:22:36.185030  182377 api_server.go:103] status: https://192.168.61.33:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1026 15:22:36.377426  182377 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1026 15:22:36.382349  182377 api_server.go:279] https://192.168.61.33:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 15:22:36.382371  182377 api_server.go:103] status: https://192.168.61.33:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 15:22:36.876548  182377 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1026 15:22:36.881970  182377 api_server.go:279] https://192.168.61.33:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 15:22:36.882006  182377 api_server.go:103] status: https://192.168.61.33:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 15:22:37.376698  182377 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1026 15:22:37.384123  182377 api_server.go:279] https://192.168.61.33:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 15:22:37.384156  182377 api_server.go:103] status: https://192.168.61.33:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 15:22:37.876774  182377 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1026 15:22:37.882031  182377 api_server.go:279] https://192.168.61.33:8443/healthz returned 200:
	ok
	I1026 15:22:37.891824  182377 api_server.go:141] control plane version: v1.34.1
	I1026 15:22:37.891850  182377 api_server.go:131] duration metric: took 4.515475379s to wait for apiserver health ...
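(The long run of healthz responses above shows the expected progression during an apiserver restart: connection refused while the process starts, 403 for the anonymous probe before RBAC bootstrap, 500 while poststart hooks such as rbac/bootstrap-roles finish, then 200. A minimal polling sketch that accepts only 200, assuming a hypothetical waitForHealthz helper and skipping TLS verification as an anonymous probe would:)

	// Sketch: poll /healthz every 500ms until it returns 200 or a deadline
	// passes; 403 and 500 both count as "not ready yet".
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.61.33:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
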
	I1026 15:22:37.891861  182377 cni.go:84] Creating CNI manager for ""
	I1026 15:22:37.891868  182377 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 15:22:37.893513  182377 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1026 15:22:37.894739  182377 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1026 15:22:37.909012  182377 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
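(The bridge-CNI step above copies a conflist into /etc/cni/net.d. The sketch below writes a minimal bridge + portmap conflist of the general shape such a file takes; the field values are illustrative only, apart from the 10.42.0.0/16 pod CIDR taken from the cluster config earlier in the log, and this is not the exact file minikube generates.)

	// Sketch: write a minimal bridge CNI conflist (requires root, like the
	// sudo mkdir/scp in the log).
	package main

	import "os"

	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": {"type": "host-local", "subnet": "10.42.0.0/16"}
	    },
	    {"type": "portmap", "capabilities": {"portMappings": true}}
	  ]
	}`

	func main() {
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			panic(err)
		}
	}
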
	I1026 15:22:37.935970  182377 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 15:22:37.941779  182377 system_pods.go:59] 8 kube-system pods found
	I1026 15:22:37.941822  182377 system_pods.go:61] "coredns-66bc5c9577-fbtqn" [317aed6d-9584-40f3-9d5c-9e3c670811e8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:22:37.941834  182377 system_pods.go:61] "etcd-newest-cni-574718" [527dfb34-9071-44bf-be3c-75921ad0c849] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:22:37.941848  182377 system_pods.go:61] "kube-apiserver-newest-cni-574718" [4285cb5e-4a30-4d87-8996-1f5fbe723525] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:22:37.941862  182377 system_pods.go:61] "kube-controller-manager-newest-cni-574718" [42199d84-c838-436b-ada5-de73d6269345] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:22:37.941873  182377 system_pods.go:61] "kube-proxy-f9l99" [5e0c5bab-fea7-41d6-bffe-b659055cf68c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1026 15:22:37.941878  182377 system_pods.go:61] "kube-scheduler-newest-cni-574718" [0250002e-226b-45d2-a685-6e315db3d009] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:22:37.941884  182377 system_pods.go:61] "metrics-server-746fcd58dc-7vxxx" [15ffbc76-a090-4786-9808-18f8b4e5ebb8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 15:22:37.941889  182377 system_pods.go:61] "storage-provisioner" [4ec0a217-f2c8-4395-babe-ee26b81a7e69] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 15:22:37.941897  182377 system_pods.go:74] duration metric: took 5.899576ms to wait for pod list to return data ...
	I1026 15:22:37.941906  182377 node_conditions.go:102] verifying NodePressure condition ...
	I1026 15:22:37.946827  182377 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 15:22:37.946868  182377 node_conditions.go:123] node cpu capacity is 2
	I1026 15:22:37.946885  182377 node_conditions.go:105] duration metric: took 4.973356ms to run NodePressure ...
	I1026 15:22:37.946955  182377 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1026 15:22:38.207008  182377 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 15:22:38.236075  182377 ops.go:34] apiserver oom_adj: -16
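(The oom_adj probe above confirms the apiserver got a protected OOM score (-16). A small sketch of the same probe, assuming pgrep is available; the helper is hypothetical and uses exact-name matching rather than the full-command-line match in the log.)

	// Sketch: find the kube-apiserver PID and print /proc/<pid>/oom_adj.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "kube-apiserver not running:", err)
			os.Exit(1)
		}
		pid := strings.TrimSpace(string(out))
		adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("kube-apiserver oom_adj: %s", adj)
	}
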
	I1026 15:22:38.236107  182377 kubeadm.go:601] duration metric: took 9.192128682s to restartPrimaryControlPlane
	I1026 15:22:38.236126  182377 kubeadm.go:402] duration metric: took 9.243002383s to StartCluster
	I1026 15:22:38.236154  182377 settings.go:142] acquiring lock: {Name:mk260d179873b5d5f15b4780b692965367036bbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:22:38.236270  182377 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-137233/kubeconfig
	I1026 15:22:38.238433  182377 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-137233/kubeconfig: {Name:mka07626640e842c6c2177ad5f101c4a2dd91d4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 15:22:38.238827  182377 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.33 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 15:22:38.238959  182377 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 15:22:38.239088  182377 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-574718"
	I1026 15:22:38.239110  182377 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-574718"
	W1026 15:22:38.239120  182377 addons.go:247] addon storage-provisioner should already be in state true
	I1026 15:22:38.239127  182377 addons.go:69] Setting default-storageclass=true in profile "newest-cni-574718"
	I1026 15:22:38.239155  182377 host.go:66] Checking if "newest-cni-574718" exists ...
	I1026 15:22:38.239168  182377 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-574718"
	I1026 15:22:38.239190  182377 addons.go:69] Setting dashboard=true in profile "newest-cni-574718"
	I1026 15:22:38.239234  182377 addons.go:238] Setting addon dashboard=true in "newest-cni-574718"
	W1026 15:22:38.239252  182377 addons.go:247] addon dashboard should already be in state true
	I1026 15:22:38.239176  182377 config.go:182] Loaded profile config "newest-cni-574718": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:22:38.239296  182377 host.go:66] Checking if "newest-cni-574718" exists ...
	I1026 15:22:38.239172  182377 addons.go:69] Setting metrics-server=true in profile "newest-cni-574718"
	I1026 15:22:38.239373  182377 addons.go:238] Setting addon metrics-server=true in "newest-cni-574718"
	W1026 15:22:38.239384  182377 addons.go:247] addon metrics-server should already be in state true
	I1026 15:22:38.239411  182377 host.go:66] Checking if "newest-cni-574718" exists ...
	I1026 15:22:38.240384  182377 out.go:179] * Verifying Kubernetes components...
	I1026 15:22:38.241817  182377 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 15:22:38.243158  182377 addons.go:238] Setting addon default-storageclass=true in "newest-cni-574718"
	W1026 15:22:38.243174  182377 addons.go:247] addon default-storageclass should already be in state true
	I1026 15:22:38.243191  182377 host.go:66] Checking if "newest-cni-574718" exists ...
	I1026 15:22:38.243431  182377 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 15:22:38.243449  182377 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1026 15:22:38.243435  182377 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1026 15:22:38.244547  182377 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 15:22:38.244562  182377 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 15:22:38.244795  182377 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1026 15:22:38.244828  182377 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1026 15:22:38.244850  182377 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:22:38.244868  182377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 15:22:38.245802  182377 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1026 15:22:38.246890  182377 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1026 15:22:38.246914  182377 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1026 15:22:38.248534  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:38.248638  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:38.248957  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:38.249338  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:38.249373  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:38.249432  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:38.249474  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:38.249621  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:38.249648  182377 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/newest-cni-574718/id_rsa Username:docker}
	I1026 15:22:38.249665  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:38.249857  182377 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/newest-cni-574718/id_rsa Username:docker}
	I1026 15:22:38.249989  182377 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/newest-cni-574718/id_rsa Username:docker}
	I1026 15:22:38.250917  182377 main.go:141] libmachine: domain newest-cni-574718 has defined MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:38.251364  182377 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7b:b5:97", ip: ""} in network mk-newest-cni-574718: {Iface:virbr3 ExpiryTime:2025-10-26 16:22:19 +0000 UTC Type:0 Mac:52:54:00:7b:b5:97 Iaid: IPaddr:192.168.61.33 Prefix:24 Hostname:newest-cni-574718 Clientid:01:52:54:00:7b:b5:97}
	I1026 15:22:38.251395  182377 main.go:141] libmachine: domain newest-cni-574718 has defined IP address 192.168.61.33 and MAC address 52:54:00:7b:b5:97 in network mk-newest-cni-574718
	I1026 15:22:38.251570  182377 sshutil.go:53] new ssh client: &{IP:192.168.61.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/newest-cni-574718/id_rsa Username:docker}
	I1026 15:22:38.548715  182377 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 15:22:38.574744  182377 api_server.go:52] waiting for apiserver process to appear ...
	I1026 15:22:38.574851  182377 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 15:22:38.594161  182377 api_server.go:72] duration metric: took 355.284664ms to wait for apiserver process to appear ...
	I1026 15:22:38.594202  182377 api_server.go:88] waiting for apiserver healthz status ...
	I1026 15:22:38.594226  182377 api_server.go:253] Checking apiserver healthz at https://192.168.61.33:8443/healthz ...
	I1026 15:22:38.599953  182377 api_server.go:279] https://192.168.61.33:8443/healthz returned 200:
	ok
	I1026 15:22:38.601088  182377 api_server.go:141] control plane version: v1.34.1
	I1026 15:22:38.601116  182377 api_server.go:131] duration metric: took 6.905101ms to wait for apiserver health ...
	I1026 15:22:38.601130  182377 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 15:22:38.604838  182377 system_pods.go:59] 8 kube-system pods found
	I1026 15:22:38.604863  182377 system_pods.go:61] "coredns-66bc5c9577-fbtqn" [317aed6d-9584-40f3-9d5c-9e3c670811e8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 15:22:38.604872  182377 system_pods.go:61] "etcd-newest-cni-574718" [527dfb34-9071-44bf-be3c-75921ad0c849] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 15:22:38.604886  182377 system_pods.go:61] "kube-apiserver-newest-cni-574718" [4285cb5e-4a30-4d87-8996-1f5fbe723525] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 15:22:38.604917  182377 system_pods.go:61] "kube-controller-manager-newest-cni-574718" [42199d84-c838-436b-ada5-de73d6269345] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 15:22:38.604924  182377 system_pods.go:61] "kube-proxy-f9l99" [5e0c5bab-fea7-41d6-bffe-b659055cf68c] Running
	I1026 15:22:38.604930  182377 system_pods.go:61] "kube-scheduler-newest-cni-574718" [0250002e-226b-45d2-a685-6e315db3d009] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 15:22:38.604934  182377 system_pods.go:61] "metrics-server-746fcd58dc-7vxxx" [15ffbc76-a090-4786-9808-18f8b4e5ebb8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 15:22:38.604940  182377 system_pods.go:61] "storage-provisioner" [4ec0a217-f2c8-4395-babe-ee26b81a7e69] Running
	I1026 15:22:38.604945  182377 system_pods.go:74] duration metric: took 3.809261ms to wait for pod list to return data ...
	I1026 15:22:38.604952  182377 default_sa.go:34] waiting for default service account to be created ...
	I1026 15:22:38.607878  182377 default_sa.go:45] found service account: "default"
	I1026 15:22:38.607900  182377 default_sa.go:55] duration metric: took 2.941228ms for default service account to be created ...
	I1026 15:22:38.607913  182377 kubeadm.go:586] duration metric: took 369.045368ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1026 15:22:38.607930  182377 node_conditions.go:102] verifying NodePressure condition ...
	I1026 15:22:38.610509  182377 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1026 15:22:38.610524  182377 node_conditions.go:123] node cpu capacity is 2
	I1026 15:22:38.610536  182377 node_conditions.go:105] duration metric: took 2.601775ms to run NodePressure ...
	I1026 15:22:38.610549  182377 start.go:241] waiting for startup goroutines ...
	I1026 15:22:38.736034  182377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 15:22:38.789628  182377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 15:22:38.810637  182377 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1026 15:22:38.810662  182377 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1026 15:22:38.831863  182377 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1026 15:22:38.831893  182377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1026 15:22:38.877236  182377 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1026 15:22:38.877280  182377 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1026 15:22:38.881939  182377 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1026 15:22:38.881971  182377 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1026 15:22:38.934545  182377 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 15:22:38.934581  182377 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1026 15:22:38.950819  182377 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1026 15:22:38.950852  182377 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1026 15:22:38.995779  182377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 15:22:39.021057  182377 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1026 15:22:39.021079  182377 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1026 15:22:39.079563  182377 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1026 15:22:39.079594  182377 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1026 15:22:39.132351  182377 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1026 15:22:39.132382  182377 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1026 15:22:39.193426  182377 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1026 15:22:39.193470  182377 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1026 15:22:39.235471  182377 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1026 15:22:39.235496  182377 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1026 15:22:39.271746  182377 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 15:22:39.271773  182377 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1026 15:22:39.307718  182377 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 15:22:40.193013  182377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.403339708s)
	I1026 15:22:40.408827  182377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.413001507s)
	I1026 15:22:40.408876  182377 addons.go:479] Verifying addon metrics-server=true in "newest-cni-574718"
	I1026 15:22:40.667395  182377 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.359629965s)
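(The addon installs above all follow one pattern: copy manifests under /etc/kubernetes/addons and apply them with the cluster's own kubectl binary, KUBECONFIG pointed at the in-VM kubeconfig (run under sudo inside the guest). A sketch of that pattern, with applyManifests as a hypothetical helper and paths taken from the log:)

	// Sketch: apply one or more manifests with a specific kubectl binary
	// and kubeconfig, as the addon steps above do.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func applyManifests(kubectl, kubeconfig string, manifests ...string) error {
		args := []string{"apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		cmd := exec.Command(kubectl, args...)
		cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("kubectl apply: %v\n%s", err, out)
		}
		return nil
	}

	func main() {
		err := applyManifests(
			"/var/lib/minikube/binaries/v1.34.1/kubectl",
			"/var/lib/minikube/kubeconfig",
			"/etc/kubernetes/addons/storage-provisioner.yaml",
		)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
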
	I1026 15:22:40.668723  182377 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-574718 addons enable metrics-server
	
	I1026 15:22:40.669858  182377 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I1026 15:22:40.671055  182377 addons.go:514] duration metric: took 2.432108694s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I1026 15:22:40.671096  182377 start.go:246] waiting for cluster config update ...
	I1026 15:22:40.671111  182377 start.go:255] writing updated cluster config ...
	I1026 15:22:40.671384  182377 ssh_runner.go:195] Run: rm -f paused
	I1026 15:22:40.721560  182377 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1026 15:22:40.722854  182377 out.go:179] * Done! kubectl is now configured to use "newest-cni-574718" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 26 15:40:23 default-k8s-diff-port-705037 crio[882]: time="2025-10-26 15:40:23.993577264Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761493223993521669,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176956,},InodesUsed:&UInt64Value{Value:65,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b6df62ed-591c-43f9-825b-af5d4e426b45 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 15:40:23 default-k8s-diff-port-705037 crio[882]: time="2025-10-26 15:40:23.994313517Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=73356d02-5a63-4636-99cc-c74f1ead66d2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:40:23 default-k8s-diff-port-705037 crio[882]: time="2025-10-26 15:40:23.994454148Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=73356d02-5a63-4636-99cc-c74f1ead66d2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:40:23 default-k8s-diff-port-705037 crio[882]: time="2025-10-26 15:40:23.994735367Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:acf8ba23ba81449327f74cbafb6a6a5db4bd289149986b20a9416a0e3e5ec3e5,PodSandboxId:0db4ee11cf8f6547a642f65b30ec30ade1bcf9e3b4220dc00b17de4f9878b779,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1761493078724828240,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-6ffb444bf9-k9ssm,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 847870b5-f0a5-4e62-948d-006420575ba0,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78f2d85d5d3897a0b9cbe341785ab10092923a05b49f358982dd9a3f5c779c8c,PodSandboxId:bcd0a7ea7a5d6e103fc7708bc70d01512047660b62f41576a88205f4f6703fd7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761492168659339177,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d8ee4dc-96c2-4995-a68f-f41e5f0eaf9e,},Annotations
:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6dc43f94cb762259a9a89d79a1060cd93f7b74968e9896a7d880a5f2e1b62b0,PodSandboxId:e9e54484fe80f31d1a071a64c81e96d4a7e7900dc0666f430532cf36ac16daa9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761492154385792829,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 974398e3-6fd7-44da-9ec6-a726c71c9e43,},Annotations:map[string]
string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0412bc06733f8fd0774bd8f073900d3d9db7d5a5cf536fb50551e009b7fa3fce,PodSandboxId:1592430b39646fa93b92ed34c469481ea6ba2a72f29a62e863c7bb325d7cd4e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761492133087872705,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-fs558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35c18482-b39d-4e3f-aafd-51642938f5b0,},Annotations:map[string]string{io.kubernetes.container.
hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e941043507acf00395dec8fb4b6a8dcbbf419dd34749e3a514ef04e1cddfea38,PodSandboxId:885c149c4d0c4c2918bf935cca13b4ab267f244925355d335b6afd84bd86eabe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba6833007935
5e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761492124084005205,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kr5kl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7598b50f-deee-406f-86fc-1f57c2de4887,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67a44ce6a7fe8ed1fe16737d1cd5997ede10c6cdc177d1c4811a71bf5dd0e557,PodSandboxId:e9e54484fe80f31d1a071a64c81e96d4a7e7900dc0666f430532cf36ac16daa9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,S
tate:CONTAINER_EXITED,CreatedAt:1761492124104408623,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 974398e3-6fd7-44da-9ec6-a726c71c9e43,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dd8207831f2d4babe5e8eb34960616d69228f6d5d7816a3727851f8eaac22aa,PodSandboxId:393a00e3f8d416d7933ef5894352dc23e4b694c6d92c1e2ed9d778dc1a9bdffd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUN
NING,CreatedAt:1761492120445632329,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-705037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be371a0653beff17fc8179eadadb47ba,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b62b8fa07e019bae4bcb3a7e00b13211a1422309b5e2b3e490e08cf683e50047,PodSandboxId:9bb3db4855e82964e1440b256ac2e4566ce40d9f863d2416877cf24ebd75c316,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761492120434222475,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-705037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734913a4a596eb14eb488c352898c34e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d681a01f9792386a937644a3faeb309289acee370899afe44b650d7cb7ccb97b,PodSandboxId:0f393b3f130a63425f87e19d45543796d7e07b7ed4abf90a19f5c867429ae9f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbaf
e7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761492120381736115,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-705037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad4036b91abbf32d9bc0629e6b234cf1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cf3d81d69cccd2980452083a91aef44484e541762cd9e1304b3ee2e6c6826a2,PodSandboxId:d0d99bc0545f2c576e
1c4881e50f4c58b10cac1e059676da641bbc6d088d9431,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761492120346793896,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-705037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64eca7cc9b3960c61fd085cf0d208e7b,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8444,\"containerPort\":8444,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod
: 30,},},},}" file="otel-collector/interceptors.go:74" id=73356d02-5a63-4636-99cc-c74f1ead66d2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:40:24 default-k8s-diff-port-705037 crio[882]: time="2025-10-26 15:40:24.032115143Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=917230ae-1f2e-4a47-a777-338db884ad9d name=/runtime.v1.RuntimeService/Version
	Oct 26 15:40:24 default-k8s-diff-port-705037 crio[882]: time="2025-10-26 15:40:24.032270377Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=917230ae-1f2e-4a47-a777-338db884ad9d name=/runtime.v1.RuntimeService/Version
	Oct 26 15:40:24 default-k8s-diff-port-705037 crio[882]: time="2025-10-26 15:40:24.034183475Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b2dc8c43-6679-48d5-b6d5-de0ef886a2e0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 15:40:24 default-k8s-diff-port-705037 crio[882]: time="2025-10-26 15:40:24.035101645Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761493224034759138,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176956,},InodesUsed:&UInt64Value{Value:65,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b2dc8c43-6679-48d5-b6d5-de0ef886a2e0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 15:40:24 default-k8s-diff-port-705037 crio[882]: time="2025-10-26 15:40:24.036851286Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=902feda7-bb8c-42cc-ae3b-d396d5cadf96 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:40:24 default-k8s-diff-port-705037 crio[882]: time="2025-10-26 15:40:24.037050789Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=902feda7-bb8c-42cc-ae3b-d396d5cadf96 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:40:24 default-k8s-diff-port-705037 crio[882]: time="2025-10-26 15:40:24.037461543Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:acf8ba23ba81449327f74cbafb6a6a5db4bd289149986b20a9416a0e3e5ec3e5,PodSandboxId:0db4ee11cf8f6547a642f65b30ec30ade1bcf9e3b4220dc00b17de4f9878b779,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1761493078724828240,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-6ffb444bf9-k9ssm,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 847870b5-f0a5-4e62-948d-006420575ba0,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78f2d85d5d3897a0b9cbe341785ab10092923a05b49f358982dd9a3f5c779c8c,PodSandboxId:bcd0a7ea7a5d6e103fc7708bc70d01512047660b62f41576a88205f4f6703fd7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761492168659339177,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d8ee4dc-96c2-4995-a68f-f41e5f0eaf9e,},Annotations
:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6dc43f94cb762259a9a89d79a1060cd93f7b74968e9896a7d880a5f2e1b62b0,PodSandboxId:e9e54484fe80f31d1a071a64c81e96d4a7e7900dc0666f430532cf36ac16daa9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761492154385792829,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 974398e3-6fd7-44da-9ec6-a726c71c9e43,},Annotations:map[string]
string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0412bc06733f8fd0774bd8f073900d3d9db7d5a5cf536fb50551e009b7fa3fce,PodSandboxId:1592430b39646fa93b92ed34c469481ea6ba2a72f29a62e863c7bb325d7cd4e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761492133087872705,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-fs558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35c18482-b39d-4e3f-aafd-51642938f5b0,},Annotations:map[string]string{io.kubernetes.container.
hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e941043507acf00395dec8fb4b6a8dcbbf419dd34749e3a514ef04e1cddfea38,PodSandboxId:885c149c4d0c4c2918bf935cca13b4ab267f244925355d335b6afd84bd86eabe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba6833007935
5e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761492124084005205,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kr5kl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7598b50f-deee-406f-86fc-1f57c2de4887,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67a44ce6a7fe8ed1fe16737d1cd5997ede10c6cdc177d1c4811a71bf5dd0e557,PodSandboxId:e9e54484fe80f31d1a071a64c81e96d4a7e7900dc0666f430532cf36ac16daa9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,S
tate:CONTAINER_EXITED,CreatedAt:1761492124104408623,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 974398e3-6fd7-44da-9ec6-a726c71c9e43,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dd8207831f2d4babe5e8eb34960616d69228f6d5d7816a3727851f8eaac22aa,PodSandboxId:393a00e3f8d416d7933ef5894352dc23e4b694c6d92c1e2ed9d778dc1a9bdffd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUN
NING,CreatedAt:1761492120445632329,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-705037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be371a0653beff17fc8179eadadb47ba,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b62b8fa07e019bae4bcb3a7e00b13211a1422309b5e2b3e490e08cf683e50047,PodSandboxId:9bb3db4855e82964e1440b256ac2e4566ce40d9f863d2416877cf24ebd75c316,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761492120434222475,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-705037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734913a4a596eb14eb488c352898c34e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d681a01f9792386a937644a3faeb309289acee370899afe44b650d7cb7ccb97b,PodSandboxId:0f393b3f130a63425f87e19d45543796d7e07b7ed4abf90a19f5c867429ae9f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbaf
e7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761492120381736115,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-705037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad4036b91abbf32d9bc0629e6b234cf1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cf3d81d69cccd2980452083a91aef44484e541762cd9e1304b3ee2e6c6826a2,PodSandboxId:d0d99bc0545f2c576e
1c4881e50f4c58b10cac1e059676da641bbc6d088d9431,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761492120346793896,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-705037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64eca7cc9b3960c61fd085cf0d208e7b,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8444,\"containerPort\":8444,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod
: 30,},},},}" file="otel-collector/interceptors.go:74" id=902feda7-bb8c-42cc-ae3b-d396d5cadf96 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:40:24 default-k8s-diff-port-705037 crio[882]: time="2025-10-26 15:40:24.072318938Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=66c73f05-1186-4cda-84b6-dbfa1e6c2ad1 name=/runtime.v1.RuntimeService/Version
	Oct 26 15:40:24 default-k8s-diff-port-705037 crio[882]: time="2025-10-26 15:40:24.072706277Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=66c73f05-1186-4cda-84b6-dbfa1e6c2ad1 name=/runtime.v1.RuntimeService/Version
	Oct 26 15:40:24 default-k8s-diff-port-705037 crio[882]: time="2025-10-26 15:40:24.073794539Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d2879b6e-33c3-4689-980f-7d64083ab4e8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 15:40:24 default-k8s-diff-port-705037 crio[882]: time="2025-10-26 15:40:24.074462353Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761493224074436469,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176956,},InodesUsed:&UInt64Value{Value:65,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d2879b6e-33c3-4689-980f-7d64083ab4e8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 15:40:24 default-k8s-diff-port-705037 crio[882]: time="2025-10-26 15:40:24.075176640Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a44b29b3-95a2-4929-bf40-87e7d1d7d56c name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:40:24 default-k8s-diff-port-705037 crio[882]: time="2025-10-26 15:40:24.075304387Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a44b29b3-95a2-4929-bf40-87e7d1d7d56c name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:40:24 default-k8s-diff-port-705037 crio[882]: time="2025-10-26 15:40:24.075589821Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:acf8ba23ba81449327f74cbafb6a6a5db4bd289149986b20a9416a0e3e5ec3e5,PodSandboxId:0db4ee11cf8f6547a642f65b30ec30ade1bcf9e3b4220dc00b17de4f9878b779,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1761493078724828240,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-6ffb444bf9-k9ssm,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 847870b5-f0a5-4e62-948d-006420575ba0,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78f2d85d5d3897a0b9cbe341785ab10092923a05b49f358982dd9a3f5c779c8c,PodSandboxId:bcd0a7ea7a5d6e103fc7708bc70d01512047660b62f41576a88205f4f6703fd7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761492168659339177,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d8ee4dc-96c2-4995-a68f-f41e5f0eaf9e,},Annotations
:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6dc43f94cb762259a9a89d79a1060cd93f7b74968e9896a7d880a5f2e1b62b0,PodSandboxId:e9e54484fe80f31d1a071a64c81e96d4a7e7900dc0666f430532cf36ac16daa9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761492154385792829,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 974398e3-6fd7-44da-9ec6-a726c71c9e43,},Annotations:map[string]
string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0412bc06733f8fd0774bd8f073900d3d9db7d5a5cf536fb50551e009b7fa3fce,PodSandboxId:1592430b39646fa93b92ed34c469481ea6ba2a72f29a62e863c7bb325d7cd4e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761492133087872705,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-fs558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35c18482-b39d-4e3f-aafd-51642938f5b0,},Annotations:map[string]string{io.kubernetes.container.
hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e941043507acf00395dec8fb4b6a8dcbbf419dd34749e3a514ef04e1cddfea38,PodSandboxId:885c149c4d0c4c2918bf935cca13b4ab267f244925355d335b6afd84bd86eabe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba6833007935
5e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761492124084005205,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kr5kl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7598b50f-deee-406f-86fc-1f57c2de4887,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67a44ce6a7fe8ed1fe16737d1cd5997ede10c6cdc177d1c4811a71bf5dd0e557,PodSandboxId:e9e54484fe80f31d1a071a64c81e96d4a7e7900dc0666f430532cf36ac16daa9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,S
tate:CONTAINER_EXITED,CreatedAt:1761492124104408623,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 974398e3-6fd7-44da-9ec6-a726c71c9e43,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dd8207831f2d4babe5e8eb34960616d69228f6d5d7816a3727851f8eaac22aa,PodSandboxId:393a00e3f8d416d7933ef5894352dc23e4b694c6d92c1e2ed9d778dc1a9bdffd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUN
NING,CreatedAt:1761492120445632329,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-705037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be371a0653beff17fc8179eadadb47ba,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b62b8fa07e019bae4bcb3a7e00b13211a1422309b5e2b3e490e08cf683e50047,PodSandboxId:9bb3db4855e82964e1440b256ac2e4566ce40d9f863d2416877cf24ebd75c316,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761492120434222475,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-705037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734913a4a596eb14eb488c352898c34e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d681a01f9792386a937644a3faeb309289acee370899afe44b650d7cb7ccb97b,PodSandboxId:0f393b3f130a63425f87e19d45543796d7e07b7ed4abf90a19f5c867429ae9f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbaf
e7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761492120381736115,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-705037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad4036b91abbf32d9bc0629e6b234cf1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cf3d81d69cccd2980452083a91aef44484e541762cd9e1304b3ee2e6c6826a2,PodSandboxId:d0d99bc0545f2c576e
1c4881e50f4c58b10cac1e059676da641bbc6d088d9431,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761492120346793896,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-705037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64eca7cc9b3960c61fd085cf0d208e7b,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8444,\"containerPort\":8444,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod
: 30,},},},}" file="otel-collector/interceptors.go:74" id=a44b29b3-95a2-4929-bf40-87e7d1d7d56c name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:40:24 default-k8s-diff-port-705037 crio[882]: time="2025-10-26 15:40:24.110859579Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d367e4dd-cbf1-4ee1-97c4-5283ebb8383d name=/runtime.v1.RuntimeService/Version
	Oct 26 15:40:24 default-k8s-diff-port-705037 crio[882]: time="2025-10-26 15:40:24.110985265Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d367e4dd-cbf1-4ee1-97c4-5283ebb8383d name=/runtime.v1.RuntimeService/Version
	Oct 26 15:40:24 default-k8s-diff-port-705037 crio[882]: time="2025-10-26 15:40:24.113685943Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=efe440e2-4af9-4ca1-a7eb-63fd021b225e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 15:40:24 default-k8s-diff-port-705037 crio[882]: time="2025-10-26 15:40:24.114221592Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761493224114192991,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176956,},InodesUsed:&UInt64Value{Value:65,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=efe440e2-4af9-4ca1-a7eb-63fd021b225e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 26 15:40:24 default-k8s-diff-port-705037 crio[882]: time="2025-10-26 15:40:24.114756994Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=165d7f25-5a91-4710-a61f-74f839ba4ba7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:40:24 default-k8s-diff-port-705037 crio[882]: time="2025-10-26 15:40:24.114980851Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=165d7f25-5a91-4710-a61f-74f839ba4ba7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 26 15:40:24 default-k8s-diff-port-705037 crio[882]: time="2025-10-26 15:40:24.115412751Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:acf8ba23ba81449327f74cbafb6a6a5db4bd289149986b20a9416a0e3e5ec3e5,PodSandboxId:0db4ee11cf8f6547a642f65b30ec30ade1bcf9e3b4220dc00b17de4f9878b779,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1761493078724828240,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-6ffb444bf9-k9ssm,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 847870b5-f0a5-4e62-948d-006420575ba0,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78f2d85d5d3897a0b9cbe341785ab10092923a05b49f358982dd9a3f5c779c8c,PodSandboxId:bcd0a7ea7a5d6e103fc7708bc70d01512047660b62f41576a88205f4f6703fd7,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761492168659339177,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8d8ee4dc-96c2-4995-a68f-f41e5f0eaf9e,},Annotations
:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6dc43f94cb762259a9a89d79a1060cd93f7b74968e9896a7d880a5f2e1b62b0,PodSandboxId:e9e54484fe80f31d1a071a64c81e96d4a7e7900dc0666f430532cf36ac16daa9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761492154385792829,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 974398e3-6fd7-44da-9ec6-a726c71c9e43,},Annotations:map[string]
string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0412bc06733f8fd0774bd8f073900d3d9db7d5a5cf536fb50551e009b7fa3fce,PodSandboxId:1592430b39646fa93b92ed34c469481ea6ba2a72f29a62e863c7bb325d7cd4e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761492133087872705,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-fs558,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35c18482-b39d-4e3f-aafd-51642938f5b0,},Annotations:map[string]string{io.kubernetes.container.
hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e941043507acf00395dec8fb4b6a8dcbbf419dd34749e3a514ef04e1cddfea38,PodSandboxId:885c149c4d0c4c2918bf935cca13b4ab267f244925355d335b6afd84bd86eabe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba6833007935
5e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761492124084005205,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kr5kl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7598b50f-deee-406f-86fc-1f57c2de4887,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67a44ce6a7fe8ed1fe16737d1cd5997ede10c6cdc177d1c4811a71bf5dd0e557,PodSandboxId:e9e54484fe80f31d1a071a64c81e96d4a7e7900dc0666f430532cf36ac16daa9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,S
tate:CONTAINER_EXITED,CreatedAt:1761492124104408623,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 974398e3-6fd7-44da-9ec6-a726c71c9e43,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dd8207831f2d4babe5e8eb34960616d69228f6d5d7816a3727851f8eaac22aa,PodSandboxId:393a00e3f8d416d7933ef5894352dc23e4b694c6d92c1e2ed9d778dc1a9bdffd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUN
NING,CreatedAt:1761492120445632329,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-705037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be371a0653beff17fc8179eadadb47ba,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b62b8fa07e019bae4bcb3a7e00b13211a1422309b5e2b3e490e08cf683e50047,PodSandboxId:9bb3db4855e82964e1440b256ac2e4566ce40d9f863d2416877cf24ebd75c316,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761492120434222475,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-705037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 734913a4a596eb14eb488c352898c34e,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d681a01f9792386a937644a3faeb309289acee370899afe44b650d7cb7ccb97b,PodSandboxId:0f393b3f130a63425f87e19d45543796d7e07b7ed4abf90a19f5c867429ae9f4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbaf
e7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761492120381736115,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-705037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad4036b91abbf32d9bc0629e6b234cf1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cf3d81d69cccd2980452083a91aef44484e541762cd9e1304b3ee2e6c6826a2,PodSandboxId:d0d99bc0545f2c576e
1c4881e50f4c58b10cac1e059676da641bbc6d088d9431,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761492120346793896,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-705037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64eca7cc9b3960c61fd085cf0d208e7b,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8444,\"containerPort\":8444,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod
: 30,},},},}" file="otel-collector/interceptors.go:74" id=165d7f25-5a91-4710-a61f-74f839ba4ba7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	acf8ba23ba814       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                      2 minutes ago       Exited              dashboard-metrics-scraper   8                   0db4ee11cf8f6       dashboard-metrics-scraper-6ffb444bf9-k9ssm
	78f2d85d5d389       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   17 minutes ago      Running             busybox                     1                   bcd0a7ea7a5d6       busybox
	e6dc43f94cb76       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      17 minutes ago      Running             storage-provisioner         3                   e9e54484fe80f       storage-provisioner
	0412bc06733f8       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      18 minutes ago      Running             coredns                     1                   1592430b39646       coredns-66bc5c9577-fs558
	67a44ce6a7fe8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago      Exited              storage-provisioner         2                   e9e54484fe80f       storage-provisioner
	e941043507acf       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      18 minutes ago      Running             kube-proxy                  1                   885c149c4d0c4       kube-proxy-kr5kl
	4dd8207831f2d       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      18 minutes ago      Running             kube-scheduler              1                   393a00e3f8d41       kube-scheduler-default-k8s-diff-port-705037
	b62b8fa07e019       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      18 minutes ago      Running             etcd                        1                   9bb3db4855e82       etcd-default-k8s-diff-port-705037
	d681a01f97923       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      18 minutes ago      Running             kube-controller-manager     1                   0f393b3f130a6       kube-controller-manager-default-k8s-diff-port-705037
	1cf3d81d69ccc       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      18 minutes ago      Running             kube-apiserver              1                   d0d99bc0545f2       kube-apiserver-default-k8s-diff-port-705037
	
	
	==> coredns [0412bc06733f8fd0774bd8f073900d3d9db7d5a5cf536fb50551e009b7fa3fce] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33122 - 62530 "HINFO IN 6525439122859490430.3700641182551545693. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.029488252s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-705037
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-705037
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=78443ca5b4f916bb82a6168756565c438d616c46
	                    minikube.k8s.io/name=default-k8s-diff-port-705037
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T15_19_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 15:19:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-705037
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 15:40:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 15:37:23 +0000   Sun, 26 Oct 2025 15:19:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 15:37:23 +0000   Sun, 26 Oct 2025 15:19:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 15:37:23 +0000   Sun, 26 Oct 2025 15:19:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 15:37:23 +0000   Sun, 26 Oct 2025 15:22:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.253
	  Hostname:    default-k8s-diff-port-705037
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3042712Ki
	  pods:               110
	System Info:
	  Machine ID:                 a056f452638844dc8e66f24d5e133cba
	  System UUID:                a056f452-6388-44dc-8e66-f24d5e133cba
	  Boot ID:                    2f85c34a-af7e-46e9-ad10-a1b5ca5b3806
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-66bc5c9577-fs558                                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     21m
	  kube-system                 etcd-default-k8s-diff-port-705037                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         21m
	  kube-system                 kube-apiserver-default-k8s-diff-port-705037             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-705037    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-kr5kl                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-default-k8s-diff-port-705037             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-746fcd58dc-nsvb5                         100m (5%)     0 (0%)      200Mi (6%)       0 (0%)         20m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-k9ssm              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-c8wqg                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (12%)  170Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 21m                kube-proxy       
	  Normal   Starting                 18m                kube-proxy       
	  Normal   Starting                 21m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-705037 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-705037 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node default-k8s-diff-port-705037 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 21m                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    21m                kubelet          Node default-k8s-diff-port-705037 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     21m                kubelet          Node default-k8s-diff-port-705037 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  21m                kubelet          Node default-k8s-diff-port-705037 status is now: NodeHasSufficientMemory
	  Normal   NodeReady                21m                kubelet          Node default-k8s-diff-port-705037 status is now: NodeReady
	  Normal   RegisteredNode           21m                node-controller  Node default-k8s-diff-port-705037 event: Registered Node default-k8s-diff-port-705037 in Controller
	  Normal   Starting                 18m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-705037 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-705037 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node default-k8s-diff-port-705037 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 18m                kubelet          Node default-k8s-diff-port-705037 has been rebooted, boot id: 2f85c34a-af7e-46e9-ad10-a1b5ca5b3806
	  Normal   RegisteredNode           18m                node-controller  Node default-k8s-diff-port-705037 event: Registered Node default-k8s-diff-port-705037 in Controller
	
	
	==> dmesg <==
	[Oct26 15:21] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001579] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000998] (rpcbind)[121]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.786519] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000022] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.124278] kauditd_printk_skb: 88 callbacks suppressed
	[Oct26 15:22] kauditd_printk_skb: 196 callbacks suppressed
	[  +0.077380] kauditd_printk_skb: 218 callbacks suppressed
	[  +1.602137] kauditd_printk_skb: 134 callbacks suppressed
	[  +0.034945] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.623285] kauditd_printk_skb: 6 callbacks suppressed
	[ +11.030191] kauditd_printk_skb: 5 callbacks suppressed
	[Oct26 15:23] kauditd_printk_skb: 27 callbacks suppressed
	[Oct26 15:25] kauditd_printk_skb: 6 callbacks suppressed
	[Oct26 15:27] kauditd_printk_skb: 6 callbacks suppressed
	[Oct26 15:32] kauditd_printk_skb: 6 callbacks suppressed
	[Oct26 15:37] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [b62b8fa07e019bae4bcb3a7e00b13211a1422309b5e2b3e490e08cf683e50047] <==
	{"level":"warn","ts":"2025-10-26T15:22:02.352132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:22:02.378078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:22:02.405427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:22:02.419887Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:22:02.440847Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:22:02.454673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:22:02.462578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:22:02.480315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:22:02.488679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:22:02.500989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:22:02.522556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:22:02.535041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:22:02.547169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:22:02.558621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:22:02.577356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:22:02.584078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:22:02.593999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:22:02.705984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T15:22:31.020266Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.152639ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7885127989601838997 > lease_revoke:<id:6d6d9a211cb5271f>","response":"size:28"}
	{"level":"info","ts":"2025-10-26T15:32:01.556417Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1051}
	{"level":"info","ts":"2025-10-26T15:32:01.577530Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1051,"took":"20.861493ms","hash":3474550068,"current-db-size-bytes":3268608,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1335296,"current-db-size-in-use":"1.3 MB"}
	{"level":"info","ts":"2025-10-26T15:32:01.577576Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3474550068,"revision":1051,"compact-revision":-1}
	{"level":"info","ts":"2025-10-26T15:37:01.563065Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1331}
	{"level":"info","ts":"2025-10-26T15:37:01.566140Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1331,"took":"2.766048ms","hash":1424629584,"current-db-size-bytes":3268608,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1830912,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-10-26T15:37:01.566187Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1424629584,"revision":1331,"compact-revision":1051}
	
	
	==> kernel <==
	 15:40:24 up 18 min,  0 users,  load average: 0.24, 0.13, 0.09
	Linux default-k8s-diff-port-705037 6.6.95 #1 SMP PREEMPT_DYNAMIC Thu Oct 16 13:22:30 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [1cf3d81d69cccd2980452083a91aef44484e541762cd9e1304b3ee2e6c6826a2] <==
	E1026 15:37:04.491295       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1026 15:37:04.491308       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1026 15:37:04.491345       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1026 15:37:04.492547       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1026 15:38:04.492048       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 15:38:04.492086       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1026 15:38:04.492129       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1026 15:38:04.493265       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 15:38:04.493329       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1026 15:38:04.493338       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1026 15:40:04.493328       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 15:40:04.493391       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1026 15:40:04.493404       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1026 15:40:04.493427       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 15:40:04.493501       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1026 15:40:04.495521       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [d681a01f9792386a937644a3faeb309289acee370899afe44b650d7cb7ccb97b] <==
	I1026 15:34:08.095177       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1026 15:34:38.044462       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 15:34:38.102403       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1026 15:35:08.049245       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 15:35:08.110504       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1026 15:35:38.053739       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 15:35:38.117630       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1026 15:36:08.061243       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 15:36:08.126834       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1026 15:36:38.066160       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 15:36:38.134743       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1026 15:37:08.071292       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 15:37:08.142587       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1026 15:37:38.075331       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 15:37:38.150173       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1026 15:38:08.080568       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 15:38:08.157544       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1026 15:38:38.085019       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 15:38:38.167147       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1026 15:39:08.090121       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 15:39:08.174761       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1026 15:39:38.095969       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 15:39:38.187508       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1026 15:40:08.100474       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 15:40:08.195323       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [e941043507acf00395dec8fb4b6a8dcbbf419dd34749e3a514ef04e1cddfea38] <==
	I1026 15:22:04.292327       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 15:22:04.394477       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 15:22:04.394520       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.72.253"]
	E1026 15:22:04.394617       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 15:22:04.469563       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1026 15:22:04.469654       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1026 15:22:04.469720       1 server_linux.go:132] "Using iptables Proxier"
	I1026 15:22:04.508220       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 15:22:04.508746       1 server.go:527] "Version info" version="v1.34.1"
	I1026 15:22:04.508807       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:22:04.514600       1 config.go:200] "Starting service config controller"
	I1026 15:22:04.514664       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 15:22:04.514682       1 config.go:106] "Starting endpoint slice config controller"
	I1026 15:22:04.514686       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 15:22:04.514695       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 15:22:04.514780       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 15:22:04.522689       1 config.go:309] "Starting node config controller"
	I1026 15:22:04.523596       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 15:22:04.523851       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 15:22:04.614825       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1026 15:22:04.614865       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 15:22:04.614883       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [4dd8207831f2d4babe5e8eb34960616d69228f6d5d7816a3727851f8eaac22aa] <==
	I1026 15:22:03.395771       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 15:22:03.402633       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1026 15:22:03.402745       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:22:03.402771       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 15:22:03.403484       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1026 15:22:03.443272       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1026 15:22:03.443366       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 15:22:03.443873       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1026 15:22:03.444333       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 15:22:03.444416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 15:22:03.444498       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 15:22:03.444553       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1026 15:22:03.444644       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1026 15:22:03.444737       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1026 15:22:03.444804       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 15:22:03.444869       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 15:22:03.447027       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1026 15:22:03.447128       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1026 15:22:03.447187       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1026 15:22:03.447252       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1026 15:22:03.447373       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1026 15:22:03.447436       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1026 15:22:03.447497       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1026 15:22:03.448798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1026 15:22:04.303395       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 15:39:34 default-k8s-diff-port-705037 kubelet[1214]: I1026 15:39:34.713942    1214 scope.go:117] "RemoveContainer" containerID="acf8ba23ba81449327f74cbafb6a6a5db4bd289149986b20a9416a0e3e5ec3e5"
	Oct 26 15:39:34 default-k8s-diff-port-705037 kubelet[1214]: E1026 15:39:34.714545    1214 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-k9ssm_kubernetes-dashboard(847870b5-f0a5-4e62-948d-006420575ba0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k9ssm" podUID="847870b5-f0a5-4e62-948d-006420575ba0"
	Oct 26 15:39:38 default-k8s-diff-port-705037 kubelet[1214]: E1026 15:39:38.714859    1214 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-c8wqg" podUID="cc5b36c9-7c56-4a05-8b30-8bf6d2b12ef4"
	Oct 26 15:39:39 default-k8s-diff-port-705037 kubelet[1214]: E1026 15:39:39.966745    1214 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761493179966522992  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 26 15:39:39 default-k8s-diff-port-705037 kubelet[1214]: E1026 15:39:39.966765    1214 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761493179966522992  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 26 15:39:46 default-k8s-diff-port-705037 kubelet[1214]: I1026 15:39:46.714341    1214 scope.go:117] "RemoveContainer" containerID="acf8ba23ba81449327f74cbafb6a6a5db4bd289149986b20a9416a0e3e5ec3e5"
	Oct 26 15:39:46 default-k8s-diff-port-705037 kubelet[1214]: E1026 15:39:46.714554    1214 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-k9ssm_kubernetes-dashboard(847870b5-f0a5-4e62-948d-006420575ba0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k9ssm" podUID="847870b5-f0a5-4e62-948d-006420575ba0"
	Oct 26 15:39:47 default-k8s-diff-port-705037 kubelet[1214]: E1026 15:39:47.715093    1214 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-nsvb5" podUID="28c11adc-3f4d-46bc-abc5-f9b466e2ca10"
	Oct 26 15:39:49 default-k8s-diff-port-705037 kubelet[1214]: E1026 15:39:49.968159    1214 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761493189967686317  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 26 15:39:49 default-k8s-diff-port-705037 kubelet[1214]: E1026 15:39:49.968182    1214 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761493189967686317  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 26 15:39:50 default-k8s-diff-port-705037 kubelet[1214]: E1026 15:39:50.715412    1214 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-c8wqg" podUID="cc5b36c9-7c56-4a05-8b30-8bf6d2b12ef4"
	Oct 26 15:39:59 default-k8s-diff-port-705037 kubelet[1214]: E1026 15:39:59.970287    1214 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761493199969950625  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 26 15:39:59 default-k8s-diff-port-705037 kubelet[1214]: E1026 15:39:59.970344    1214 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761493199969950625  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 26 15:40:00 default-k8s-diff-port-705037 kubelet[1214]: I1026 15:40:00.713009    1214 scope.go:117] "RemoveContainer" containerID="acf8ba23ba81449327f74cbafb6a6a5db4bd289149986b20a9416a0e3e5ec3e5"
	Oct 26 15:40:00 default-k8s-diff-port-705037 kubelet[1214]: E1026 15:40:00.713216    1214 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-k9ssm_kubernetes-dashboard(847870b5-f0a5-4e62-948d-006420575ba0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k9ssm" podUID="847870b5-f0a5-4e62-948d-006420575ba0"
	Oct 26 15:40:01 default-k8s-diff-port-705037 kubelet[1214]: E1026 15:40:01.716540    1214 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-nsvb5" podUID="28c11adc-3f4d-46bc-abc5-f9b466e2ca10"
	Oct 26 15:40:01 default-k8s-diff-port-705037 kubelet[1214]: E1026 15:40:01.716582    1214 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-c8wqg" podUID="cc5b36c9-7c56-4a05-8b30-8bf6d2b12ef4"
	Oct 26 15:40:09 default-k8s-diff-port-705037 kubelet[1214]: E1026 15:40:09.971465    1214 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761493209971180428  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 26 15:40:09 default-k8s-diff-port-705037 kubelet[1214]: E1026 15:40:09.971489    1214 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761493209971180428  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 26 15:40:14 default-k8s-diff-port-705037 kubelet[1214]: E1026 15:40:14.714727    1214 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-nsvb5" podUID="28c11adc-3f4d-46bc-abc5-f9b466e2ca10"
	Oct 26 15:40:15 default-k8s-diff-port-705037 kubelet[1214]: I1026 15:40:15.716722    1214 scope.go:117] "RemoveContainer" containerID="acf8ba23ba81449327f74cbafb6a6a5db4bd289149986b20a9416a0e3e5ec3e5"
	Oct 26 15:40:15 default-k8s-diff-port-705037 kubelet[1214]: E1026 15:40:15.716846    1214 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-k9ssm_kubernetes-dashboard(847870b5-f0a5-4e62-948d-006420575ba0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k9ssm" podUID="847870b5-f0a5-4e62-948d-006420575ba0"
	Oct 26 15:40:15 default-k8s-diff-port-705037 kubelet[1214]: E1026 15:40:15.718572    1214 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-c8wqg" podUID="cc5b36c9-7c56-4a05-8b30-8bf6d2b12ef4"
	Oct 26 15:40:19 default-k8s-diff-port-705037 kubelet[1214]: E1026 15:40:19.972737    1214 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761493219972420214  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Oct 26 15:40:19 default-k8s-diff-port-705037 kubelet[1214]: E1026 15:40:19.972758    1214 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761493219972420214  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	
	
	==> storage-provisioner [67a44ce6a7fe8ed1fe16737d1cd5997ede10c6cdc177d1c4811a71bf5dd0e557] <==
	I1026 15:22:04.227382       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1026 15:22:34.231775       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [e6dc43f94cb762259a9a89d79a1060cd93f7b74968e9896a7d880a5f2e1b62b0] <==
	W1026 15:39:58.731632       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:40:00.735410       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:40:00.743169       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:40:02.746329       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:40:02.751829       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:40:04.754730       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:40:04.759420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:40:06.763463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:40:06.768939       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:40:08.772603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:40:08.777651       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:40:10.782368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:40:10.790165       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:40:12.793278       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:40:12.798139       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:40:14.801067       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:40:14.808862       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:40:16.811775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:40:16.816993       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:40:18.819500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:40:18.826859       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:40:20.830427       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:40:20.835086       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:40:22.838691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 15:40:22.843616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-705037 -n default-k8s-diff-port-705037
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-705037 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-nsvb5 kubernetes-dashboard-855c9754f9-c8wqg
helpers_test.go:282: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-705037 describe pod metrics-server-746fcd58dc-nsvb5 kubernetes-dashboard-855c9754f9-c8wqg
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-705037 describe pod metrics-server-746fcd58dc-nsvb5 kubernetes-dashboard-855c9754f9-c8wqg: exit status 1 (64.045573ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-nsvb5" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-c8wqg" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context default-k8s-diff-port-705037 describe pod metrics-server-746fcd58dc-nsvb5 kubernetes-dashboard-855c9754f9-c8wqg: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.19s)

                                                
                                    

Test pass (276/323)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 28.07
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.16
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.1/json-events 13.61
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.08
18 TestDownloadOnly/v1.34.1/DeleteAll 0.16
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.15
21 TestBinaryMirror 0.67
22 TestOffline 54.6
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 198.01
31 TestAddons/serial/GCPAuth/Namespaces 0.13
32 TestAddons/serial/GCPAuth/FakeCredentials 12.52
35 TestAddons/parallel/Registry 19.02
36 TestAddons/parallel/RegistryCreds 0.69
38 TestAddons/parallel/InspektorGadget 6.31
39 TestAddons/parallel/MetricsServer 6.79
41 TestAddons/parallel/CSI 63.41
42 TestAddons/parallel/Headlamp 17.4
43 TestAddons/parallel/CloudSpanner 6.89
44 TestAddons/parallel/LocalPath 56.03
45 TestAddons/parallel/NvidiaDevicePlugin 6.57
46 TestAddons/parallel/Yakd 11.94
48 TestAddons/StoppedEnableDisable 73.91
49 TestCertOptions 75.19
50 TestCertExpiration 286.58
52 TestForceSystemdFlag 58.49
53 TestForceSystemdEnv 55.54
58 TestErrorSpam/setup 35.62
59 TestErrorSpam/start 0.33
60 TestErrorSpam/status 0.65
61 TestErrorSpam/pause 1.52
62 TestErrorSpam/unpause 1.8
63 TestErrorSpam/stop 5.37
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 50.84
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 37.83
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.07
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.32
75 TestFunctional/serial/CacheCmd/cache/add_local 2.24
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.18
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.54
80 TestFunctional/serial/CacheCmd/cache/delete 0.13
81 TestFunctional/serial/MinikubeKubectlCmd 0.13
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
83 TestFunctional/serial/ExtraConfig 32.18
84 TestFunctional/serial/ComponentHealth 0.06
85 TestFunctional/serial/LogsCmd 1.33
86 TestFunctional/serial/LogsFileCmd 1.33
87 TestFunctional/serial/InvalidService 4.66
89 TestFunctional/parallel/ConfigCmd 0.4
90 TestFunctional/parallel/DashboardCmd 14.53
91 TestFunctional/parallel/DryRun 0.21
92 TestFunctional/parallel/InternationalLanguage 0.11
93 TestFunctional/parallel/StatusCmd 0.68
97 TestFunctional/parallel/ServiceCmdConnect 19.73
98 TestFunctional/parallel/AddonsCmd 0.15
99 TestFunctional/parallel/PersistentVolumeClaim 44.84
101 TestFunctional/parallel/SSHCmd 0.34
102 TestFunctional/parallel/CpCmd 1
103 TestFunctional/parallel/MySQL 22.76
104 TestFunctional/parallel/FileSync 0.17
105 TestFunctional/parallel/CertSync 1.05
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.34
113 TestFunctional/parallel/License 0.46
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.2
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
118 TestFunctional/parallel/ImageCommands/ImageBuild 4.82
119 TestFunctional/parallel/ImageCommands/Setup 1.93
120 TestFunctional/parallel/Version/short 0.08
121 TestFunctional/parallel/Version/components 0.67
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.08
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.07
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.07
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.32
135 TestFunctional/parallel/MountCmd/any-port 19.07
136 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.91
137 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.2
138 TestFunctional/parallel/ImageCommands/ImageSaveToFile 3.42
139 TestFunctional/parallel/ImageCommands/ImageRemove 1.1
140 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 3.56
141 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.55
142 TestFunctional/parallel/MountCmd/specific-port 1.26
143 TestFunctional/parallel/MountCmd/VerifyCleanup 1.5
144 TestFunctional/parallel/ServiceCmd/DeployApp 11.39
145 TestFunctional/parallel/ProfileCmd/profile_not_create 0.36
146 TestFunctional/parallel/ProfileCmd/profile_list 0.33
147 TestFunctional/parallel/ProfileCmd/profile_json_output 0.3
148 TestFunctional/parallel/ServiceCmd/List 1.27
149 TestFunctional/parallel/ServiceCmd/JSONOutput 1.27
150 TestFunctional/parallel/ServiceCmd/HTTPS 0.3
151 TestFunctional/parallel/ServiceCmd/Format 0.29
152 TestFunctional/parallel/ServiceCmd/URL 0.32
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 192.81
161 TestMultiControlPlane/serial/DeployApp 9.18
162 TestMultiControlPlane/serial/PingHostFromPods 1.25
163 TestMultiControlPlane/serial/AddWorkerNode 45.86
164 TestMultiControlPlane/serial/NodeLabels 0.07
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.65
166 TestMultiControlPlane/serial/CopyFile 10.38
167 TestMultiControlPlane/serial/StopSecondaryNode 89.94
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.51
169 TestMultiControlPlane/serial/RestartSecondaryNode 34.8
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.73
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 359.58
172 TestMultiControlPlane/serial/DeleteSecondaryNode 18.14
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.51
174 TestMultiControlPlane/serial/StopCluster 254.55
175 TestMultiControlPlane/serial/RestartCluster 95.39
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.5
177 TestMultiControlPlane/serial/AddSecondaryNode 74.65
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.7
182 TestJSONOutput/start/Command 74.91
183 TestJSONOutput/start/Audit 0
185 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/pause/Command 0.69
189 TestJSONOutput/pause/Audit 0
191 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/unpause/Command 0.63
195 TestJSONOutput/unpause/Audit 0
197 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/stop/Command 6.82
201 TestJSONOutput/stop/Audit 0
203 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
205 TestErrorJSONOutput 0.22
210 TestMainNoArgs 0.06
211 TestMinikubeProfile 75.32
214 TestMountStart/serial/StartWithMountFirst 20.92
215 TestMountStart/serial/VerifyMountFirst 0.31
216 TestMountStart/serial/StartWithMountSecond 20.79
217 TestMountStart/serial/VerifyMountSecond 0.3
218 TestMountStart/serial/DeleteFirst 0.67
219 TestMountStart/serial/VerifyMountPostDelete 0.31
220 TestMountStart/serial/Stop 1.18
221 TestMountStart/serial/RestartStopped 18.33
222 TestMountStart/serial/VerifyMountPostStop 0.3
225 TestMultiNode/serial/FreshStart2Nodes 96.17
226 TestMultiNode/serial/DeployApp2Nodes 6.17
227 TestMultiNode/serial/PingHostFrom2Pods 0.85
228 TestMultiNode/serial/AddNode 42.48
229 TestMultiNode/serial/MultiNodeLabels 0.06
230 TestMultiNode/serial/ProfileList 0.44
231 TestMultiNode/serial/CopyFile 5.92
232 TestMultiNode/serial/StopNode 2.13
233 TestMultiNode/serial/StartAfterStop 39.57
234 TestMultiNode/serial/RestartKeepsNodes 288.3
235 TestMultiNode/serial/DeleteNode 2.54
236 TestMultiNode/serial/StopMultiNode 163.2
237 TestMultiNode/serial/RestartMultiNode 83.7
238 TestMultiNode/serial/ValidateNameConflict 39.16
245 TestScheduledStopUnix 107.79
249 TestRunningBinaryUpgrade 140.17
251 TestKubernetesUpgrade 121.14
254 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
255 TestNoKubernetes/serial/StartWithK8s 76.34
256 TestNoKubernetes/serial/StartWithStopK8s 44.97
257 TestNoKubernetes/serial/Start 26.42
265 TestNetworkPlugins/group/false 3.85
269 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
270 TestNoKubernetes/serial/ProfileList 0.86
271 TestNoKubernetes/serial/Stop 1.42
272 TestNoKubernetes/serial/StartNoArgs 34.19
281 TestPause/serial/Start 106.91
282 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.18
283 TestStoppedBinaryUpgrade/Setup 3.96
284 TestStoppedBinaryUpgrade/Upgrade 110.58
286 TestStoppedBinaryUpgrade/MinikubeLogs 1.07
287 TestNetworkPlugins/group/auto/Start 53.76
288 TestNetworkPlugins/group/kindnet/Start 74.54
289 TestNetworkPlugins/group/calico/Start 66.92
290 TestNetworkPlugins/group/auto/KubeletFlags 0.18
291 TestNetworkPlugins/group/auto/NetCatPod 11.24
292 TestNetworkPlugins/group/auto/DNS 0.14
293 TestNetworkPlugins/group/auto/Localhost 0.61
294 TestNetworkPlugins/group/auto/HairPin 0.12
295 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
296 TestNetworkPlugins/group/custom-flannel/Start 74.01
297 TestNetworkPlugins/group/kindnet/KubeletFlags 0.18
298 TestNetworkPlugins/group/kindnet/NetCatPod 11.22
299 TestNetworkPlugins/group/kindnet/DNS 0.17
300 TestNetworkPlugins/group/kindnet/Localhost 0.15
301 TestNetworkPlugins/group/kindnet/HairPin 0.12
302 TestNetworkPlugins/group/calico/ControllerPod 6.01
303 TestNetworkPlugins/group/enable-default-cni/Start 89.78
304 TestNetworkPlugins/group/calico/KubeletFlags 0.22
305 TestNetworkPlugins/group/calico/NetCatPod 12.26
306 TestNetworkPlugins/group/calico/DNS 0.16
307 TestNetworkPlugins/group/calico/Localhost 0.15
308 TestNetworkPlugins/group/calico/HairPin 0.16
309 TestNetworkPlugins/group/flannel/Start 71.39
310 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.18
311 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.29
312 TestNetworkPlugins/group/custom-flannel/DNS 0.15
313 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
314 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
315 TestNetworkPlugins/group/bridge/Start 83.64
316 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.17
317 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.22
318 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
319 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
320 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
321 TestNetworkPlugins/group/flannel/ControllerPod 6.01
322 TestNetworkPlugins/group/flannel/KubeletFlags 0.2
323 TestNetworkPlugins/group/flannel/NetCatPod 11.27
325 TestStartStop/group/old-k8s-version/serial/FirstStart 95.02
326 TestNetworkPlugins/group/flannel/DNS 0.17
327 TestNetworkPlugins/group/flannel/Localhost 0.15
328 TestNetworkPlugins/group/flannel/HairPin 0.16
330 TestStartStop/group/no-preload/serial/FirstStart 100.55
331 TestNetworkPlugins/group/bridge/KubeletFlags 0.18
332 TestNetworkPlugins/group/bridge/NetCatPod 11.24
333 TestNetworkPlugins/group/bridge/DNS 0.19
334 TestNetworkPlugins/group/bridge/Localhost 0.14
335 TestNetworkPlugins/group/bridge/HairPin 0.16
337 TestStartStop/group/embed-certs/serial/FirstStart 87.6
338 TestStartStop/group/old-k8s-version/serial/DeployApp 11.39
339 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.19
340 TestStartStop/group/old-k8s-version/serial/Stop 81.26
342 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 80.6
343 TestStartStop/group/no-preload/serial/DeployApp 11.27
344 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.91
345 TestStartStop/group/no-preload/serial/Stop 88.96
346 TestStartStop/group/embed-certs/serial/DeployApp 10.32
347 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.84
348 TestStartStop/group/embed-certs/serial/Stop 82.16
349 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.14
350 TestStartStop/group/old-k8s-version/serial/SecondStart 37.9
351 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.26
352 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.96
353 TestStartStop/group/default-k8s-diff-port/serial/Stop 88.85
354 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.17
355 TestStartStop/group/no-preload/serial/SecondStart 56.92
356 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 19.01
357 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
358 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.15
359 TestStartStop/group/embed-certs/serial/SecondStart 44.48
360 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.28
361 TestStartStop/group/old-k8s-version/serial/Pause 2.77
363 TestStartStop/group/newest-cni/serial/FirstStart 54.92
364 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 12.01
365 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
366 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.15
367 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 44.68
369 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.22
370 TestStartStop/group/no-preload/serial/Pause 2.74
371 TestStartStop/group/newest-cni/serial/DeployApp 0
372 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.29
373 TestStartStop/group/newest-cni/serial/Stop 10.95
374 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.15
375 TestStartStop/group/newest-cni/serial/SecondStart 33.07
377 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
378 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
379 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
380 TestStartStop/group/newest-cni/serial/Pause 3.81
383 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.2
384 TestStartStop/group/embed-certs/serial/Pause 2.3
385 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.2
386 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.4
TestDownloadOnly/v1.28.0/json-events (28.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-908936 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-908936 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (28.073310046s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (28.07s)
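For reference, the download-only phase can be replayed outside the test harness with the same invocation the test issues. A minimal sketch, assuming a locally built out/minikube-linux-amd64 and a working kvm2/libvirt environment (the profile name is arbitrary):

    # Pre-fetch the v1.28.0 ISO and preload tarball without creating a VM
    out/minikube-linux-amd64 start -o=json --download-only -p download-only-908936 \
      --force --alsologtostderr --kubernetes-version=v1.28.0 \
      --container-runtime=crio --driver=kvm2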

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1026 14:15:21.520686  141233 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1026 14:15:21.520825  141233 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-137233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-908936
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-908936: exit status 85 (77.902097ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-908936 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-908936 │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 14:14:53
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 14:14:53.499966  141244 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:14:53.500224  141244 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:14:53.500233  141244 out.go:374] Setting ErrFile to fd 2...
	I1026 14:14:53.500237  141244 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:14:53.500462  141244 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-137233/.minikube/bin
	W1026 14:14:53.500579  141244 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21664-137233/.minikube/config/config.json: open /home/jenkins/minikube-integration/21664-137233/.minikube/config/config.json: no such file or directory
	I1026 14:14:53.501066  141244 out.go:368] Setting JSON to true
	I1026 14:14:53.502656  141244 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3427,"bootTime":1761484666,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 14:14:53.502743  141244 start.go:141] virtualization: kvm guest
	I1026 14:14:53.504526  141244 out.go:99] [download-only-908936] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1026 14:14:53.504675  141244 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21664-137233/.minikube/cache/preloaded-tarball: no such file or directory
	I1026 14:14:53.504726  141244 notify.go:220] Checking for updates...
	I1026 14:14:53.505875  141244 out.go:171] MINIKUBE_LOCATION=21664
	I1026 14:14:53.507013  141244 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 14:14:53.508086  141244 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21664-137233/kubeconfig
	I1026 14:14:53.509156  141244 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-137233/.minikube
	I1026 14:14:53.510271  141244 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1026 14:14:53.515676  141244 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1026 14:14:53.515883  141244 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 14:14:54.000612  141244 out.go:99] Using the kvm2 driver based on user configuration
	I1026 14:14:54.000653  141244 start.go:305] selected driver: kvm2
	I1026 14:14:54.000660  141244 start.go:925] validating driver "kvm2" against <nil>
	I1026 14:14:54.001018  141244 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 14:14:54.001556  141244 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1026 14:14:54.001729  141244 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1026 14:14:54.001752  141244 cni.go:84] Creating CNI manager for ""
	I1026 14:14:54.001808  141244 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 14:14:54.001820  141244 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1026 14:14:54.001889  141244 start.go:349] cluster config:
	{Name:download-only-908936 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-908936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 14:14:54.002051  141244 iso.go:125] acquiring lock: {Name:mkfe78fcc13f0f0cc3fec30206c34a5da423b32d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 14:14:54.003494  141244 out.go:99] Downloading VM boot image ...
	I1026 14:14:54.003530  141244 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21664-137233/.minikube/cache/iso/amd64/minikube-v1.37.0-1760609724-21757-amd64.iso
	I1026 14:15:06.471296  141244 out.go:99] Starting "download-only-908936" primary control-plane node in "download-only-908936" cluster
	I1026 14:15:06.471318  141244 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1026 14:15:06.582623  141244 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1026 14:15:06.582657  141244 cache.go:58] Caching tarball of preloaded images
	I1026 14:15:06.583451  141244 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1026 14:15:06.585143  141244 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1026 14:15:06.585157  141244 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1026 14:15:06.698630  141244 preload.go:290] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1026 14:15:06.698755  141244 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21664-137233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-908936 host does not exist
	  To start a cluster, run: "minikube start -p download-only-908936"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
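The exit status 85 above is expected: the profile was only used for downloads, so there is no control-plane host for "logs" to inspect. A quick sketch to confirm the behaviour locally, assuming the download-only profile from the previous step still exists:

    out/minikube-linux-amd64 logs -p download-only-908936
    echo $?    # 85 as long as the host has never been started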

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-908936
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (13.61s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-183267 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-183267 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (13.607642339s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (13.61s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1026 14:15:35.519127  141233 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1026 14:15:35.519172  141233 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-137233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-183267
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-183267: exit status 85 (81.167381ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-908936 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-908936 │ jenkins │ v1.37.0 │ 26 Oct 25 14:14 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.37.0 │ 26 Oct 25 14:15 UTC │ 26 Oct 25 14:15 UTC │
	│ delete  │ -p download-only-908936                                                                                                                                                 │ download-only-908936 │ jenkins │ v1.37.0 │ 26 Oct 25 14:15 UTC │ 26 Oct 25 14:15 UTC │
	│ start   │ -o=json --download-only -p download-only-183267 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-183267 │ jenkins │ v1.37.0 │ 26 Oct 25 14:15 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 14:15:21
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 14:15:21.964714  141522 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:15:21.965006  141522 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:15:21.965016  141522 out.go:374] Setting ErrFile to fd 2...
	I1026 14:15:21.965022  141522 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:15:21.965274  141522 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-137233/.minikube/bin
	I1026 14:15:21.965755  141522 out.go:368] Setting JSON to true
	I1026 14:15:21.966741  141522 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3456,"bootTime":1761484666,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 14:15:21.966833  141522 start.go:141] virtualization: kvm guest
	I1026 14:15:21.968694  141522 out.go:99] [download-only-183267] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 14:15:21.968833  141522 notify.go:220] Checking for updates...
	I1026 14:15:21.970213  141522 out.go:171] MINIKUBE_LOCATION=21664
	I1026 14:15:21.971501  141522 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 14:15:21.972816  141522 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21664-137233/kubeconfig
	I1026 14:15:21.974142  141522 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-137233/.minikube
	I1026 14:15:21.975327  141522 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1026 14:15:21.977311  141522 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1026 14:15:21.977616  141522 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 14:15:22.008666  141522 out.go:99] Using the kvm2 driver based on user configuration
	I1026 14:15:22.008700  141522 start.go:305] selected driver: kvm2
	I1026 14:15:22.008706  141522 start.go:925] validating driver "kvm2" against <nil>
	I1026 14:15:22.009011  141522 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 14:15:22.009509  141522 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1026 14:15:22.009657  141522 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1026 14:15:22.009678  141522 cni.go:84] Creating CNI manager for ""
	I1026 14:15:22.009728  141522 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1026 14:15:22.009738  141522 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1026 14:15:22.009776  141522 start.go:349] cluster config:
	{Name:download-only-183267 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-183267 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 14:15:22.009874  141522 iso.go:125] acquiring lock: {Name:mkfe78fcc13f0f0cc3fec30206c34a5da423b32d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 14:15:22.011109  141522 out.go:99] Starting "download-only-183267" primary control-plane node in "download-only-183267" cluster
	I1026 14:15:22.011126  141522 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 14:15:22.118729  141522 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1026 14:15:22.118777  141522 cache.go:58] Caching tarball of preloaded images
	I1026 14:15:22.118948  141522 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 14:15:22.120738  141522 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1026 14:15:22.120758  141522 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1026 14:15:22.232933  141522 preload.go:290] Got checksum from GCS API "d1a46823b9241c5d38b5e0866197f2a8"
	I1026 14:15:22.232982  141522 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:d1a46823b9241c5d38b5e0866197f2a8 -> /home/jenkins/minikube-integration/21664-137233/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-183267 host does not exist
	  To start a cluster, run: "minikube start -p download-only-183267"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.16s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-183267
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestBinaryMirror (0.67s)

                                                
                                                
=== RUN   TestBinaryMirror
I1026 14:15:36.199892  141233 binary.go:78] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-143028 --alsologtostderr --binary-mirror http://127.0.0.1:45021 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-143028" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-143028
--- PASS: TestBinaryMirror (0.67s)
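The binary-mirror check downloads kubectl through a locally served mirror rather than dl.k8s.io. A rough reproduction sketch, assuming something is already serving the binaries on the chosen port (45021 is simply the port picked in this run):

    out/minikube-linux-amd64 start --download-only -p binary-mirror-143028 \
      --alsologtostderr --binary-mirror http://127.0.0.1:45021 \
      --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 delete -p binary-mirror-143028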

                                                
                                    
TestOffline (54.6s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-085842 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-085842 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (53.699997763s)
helpers_test.go:175: Cleaning up "offline-crio-085842" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-085842
--- PASS: TestOffline (54.60s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-061252
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-061252: exit status 85 (72.900049ms)

                                                
                                                
-- stdout --
	* Profile "addons-061252" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-061252"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-061252
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-061252: exit status 85 (72.790703ms)

                                                
                                                
-- stdout --
	* Profile "addons-061252" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-061252"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (198.01s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-061252 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-061252 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m18.009281396s)
--- PASS: TestAddons/Setup (198.01s)
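The setup command enables the complete addon matrix in a single start. A trimmed-down sketch with only a handful of the addons exercised here (same memory, driver and runtime as above):

    out/minikube-linux-amd64 start -p addons-061252 --wait=true --memory=4096 --alsologtostderr \
      --addons=registry --addons=metrics-server --addons=ingress --addons=ingress-dns \
      --driver=kvm2 --container-runtime=crio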

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-061252 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-061252 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (12.52s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-061252 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-061252 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [6fe79797-3a20-4bb9-83df-48301b29d260] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [6fe79797-3a20-4bb9-83df-48301b29d260] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 12.004364982s
addons_test.go:694: (dbg) Run:  kubectl --context addons-061252 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-061252 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-061252 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (12.52s)
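The assertions above come down to checking that the gcp-auth webhook injected fake credentials into the busybox pod. The same spot-checks can be run by hand:

    kubectl --context addons-061252 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
    kubectl --context addons-061252 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"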

                                                
                                    
TestAddons/parallel/Registry (19.02s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 7.228179ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-cbv4c" [7d3cca1e-f530-4267-a552-8536b1621127] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.00316825s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-rst9d" [5d630e1a-522c-4021-aa39-21738869a7c4] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005079686s
addons_test.go:392: (dbg) Run:  kubectl --context addons-061252 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-061252 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-061252 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.157603696s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-061252 ip
2025/10/26 14:19:34 [DEBUG] GET http://192.168.39.34:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-061252 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (19.02s)
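The registry check has two halves: an in-cluster reachability probe and a host-side fetch against the node address from the debug line above (192.168.39.34 is specific to this run). A sketch of both:

    # In-cluster probe, exactly as the test runs it
    kubectl --context addons-061252 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    # Host-side fetch against the address logged above
    curl -sS http://192.168.39.34:5000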

                                                
                                    
TestAddons/parallel/RegistryCreds (0.69s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 4.20703ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-061252
addons_test.go:332: (dbg) Run:  kubectl --context addons-061252 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-061252 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.69s)

                                                
                                    
TestAddons/parallel/InspektorGadget (6.31s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-n9d9d" [94543f49-5882-4838-97b6-bdddbc37c91c] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003028971s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-061252 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.31s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.79s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 6.09764ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-jgpx5" [8b47107f-7c68-4a56-82cf-e908c35fc406] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003645238s
addons_test.go:463: (dbg) Run:  kubectl --context addons-061252 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-061252 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.79s)
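Once the metrics-server pod reports healthy, pod-level metrics should resolve; the test's final check is simply:

    kubectl --context addons-061252 top pods -n kube-system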

                                                
                                    
TestAddons/parallel/CSI (63.41s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1026 14:19:29.718330  141233 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1026 14:19:29.724632  141233 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1026 14:19:29.724661  141233 kapi.go:107] duration metric: took 6.336838ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 6.349808ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-061252 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-061252 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-061252 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-061252 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-061252 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-061252 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-061252 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-061252 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-061252 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-061252 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-061252 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-061252 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-061252 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-061252 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-061252 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-061252 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-061252 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-061252 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-061252 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [bb9c5870-5099-4c7b-82f3-239dceb912be] Pending
helpers_test.go:352: "task-pv-pod" [bb9c5870-5099-4c7b-82f3-239dceb912be] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [bb9c5870-5099-4c7b-82f3-239dceb912be] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 17.00435971s
addons_test.go:572: (dbg) Run:  kubectl --context addons-061252 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-061252 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-061252 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-061252 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-061252 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-061252 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-061252 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-061252 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-061252 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-061252 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-061252 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-061252 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-061252 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-061252 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-061252 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-061252 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-061252 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-061252 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [ba0d32d0-6b7e-4a72-a894-74b2fe482236] Pending
helpers_test.go:352: "task-pv-pod-restore" [ba0d32d0-6b7e-4a72-a894-74b2fe482236] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [ba0d32d0-6b7e-4a72-a894-74b2fe482236] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.004124548s
addons_test.go:614: (dbg) Run:  kubectl --context addons-061252 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-061252 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-061252 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-061252 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-061252 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-061252 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.843801743s)
--- PASS: TestAddons/parallel/CSI (63.41s)
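The CSI sequence above is a plain create/snapshot/restore round-trip driven by the manifests under testdata/csi-hostpath-driver/ in the minikube repo. Condensed, the same flow is:

    kubectl --context addons-061252 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-061252 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl --context addons-061252 create -f testdata/csi-hostpath-driver/snapshot.yaml
    kubectl --context addons-061252 delete pod task-pv-pod
    kubectl --context addons-061252 delete pvc hpvc
    kubectl --context addons-061252 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-061252 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml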

                                                
                                    
TestAddons/parallel/Headlamp (17.4s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-061252 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-061252 --alsologtostderr -v=1: (1.104618806s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-glnc2" [f3f50f78-ed48-47ad-89b5-4a048f08fdd1] Pending
helpers_test.go:352: "headlamp-6945c6f4d-glnc2" [f3f50f78-ed48-47ad-89b5-4a048f08fdd1] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-glnc2" [f3f50f78-ed48-47ad-89b5-4a048f08fdd1] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 16.002983284s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-061252 addons disable headlamp --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Headlamp (17.40s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.89s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-lvrz9" [4167c8c3-4895-4432-ad18-4c00143f2f30] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.005891808s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-061252 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.89s)

                                                
                                    
TestAddons/parallel/LocalPath (56.03s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-061252 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-061252 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-061252 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-061252 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-061252 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-061252 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-061252 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-061252 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-061252 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [8991caf6-a36c-4a24-bab7-a9f8b8e50e74] Pending
helpers_test.go:352: "test-local-path" [8991caf6-a36c-4a24-bab7-a9f8b8e50e74] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [8991caf6-a36c-4a24-bab7-a9f8b8e50e74] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [8991caf6-a36c-4a24-bab7-a9f8b8e50e74] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.006321919s
addons_test.go:967: (dbg) Run:  kubectl --context addons-061252 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-061252 ssh "cat /opt/local-path-provisioner/pvc-aa911efc-959d-403c-96ae-f4cc24f83eca_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-061252 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-061252 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-061252 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-061252 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.148935839s)
--- PASS: TestAddons/parallel/LocalPath (56.03s)
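The local-path check ultimately reads the provisioned file straight off the node over SSH. The path embeds the PVC UID, which changes every run:

    out/minikube-linux-amd64 -p addons-061252 ssh \
      "cat /opt/local-path-provisioner/pvc-aa911efc-959d-403c-96ae-f4cc24f83eca_default_test-pvc/file1"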

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.57s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-6wtxh" [b47844e1-10f4-4b23-ae63-5df39995a764] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004842706s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-061252 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.57s)

                                                
                                    
TestAddons/parallel/Yakd (11.94s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-p5zg9" [2b73d453-96af-435b-a595-f3734989abf7] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004057101s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-061252 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-061252 addons disable yakd --alsologtostderr -v=1: (5.934990132s)
--- PASS: TestAddons/parallel/Yakd (11.94s)

                                                
                                    
TestAddons/StoppedEnableDisable (73.91s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-061252
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-061252: (1m13.696413204s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-061252
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-061252
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-061252
--- PASS: TestAddons/StoppedEnableDisable (73.91s)

TestCertOptions (75.19s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-584872 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E1026 15:08:55.618747  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-584872 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m13.701425118s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-584872 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-584872 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-584872 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-584872" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-584872
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-584872: (1.04115094s)
--- PASS: TestCertOptions (75.19s)
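For reference, the certificate options exercised by this test can be checked by hand against a scratch profile. This is a minimal sketch using the flags recorded above; the profile name cert-options-demo is illustrative, and a plain `minikube` binary stands in for the test's out/minikube-linux-amd64 build.

    # Start a profile whose apiserver certificate must include the extra IPs, names and port.
    minikube start -p cert-options-demo --memory=3072 \
      --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
      --apiserver-names=localhost --apiserver-names=www.google.com \
      --apiserver-port=8555 --driver=kvm2 --container-runtime=crio

    # Inspect the SANs on the generated certificate and the port recorded in admin.conf.
    minikube -p cert-options-demo ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
    minikube ssh -p cert-options-demo -- "sudo cat /etc/kubernetes/admin.conf"

    # Clean up the scratch profile.
    minikube delete -p cert-options-demo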

TestCertExpiration (286.58s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-553579 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-553579 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (56.555052012s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-553579 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-553579 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (49.135438651s)
helpers_test.go:175: Cleaning up "cert-expiration-553579" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-553579
--- PASS: TestCertExpiration (286.58s)
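The flow above can be reproduced manually: start a profile with short-lived certificates, let the three-minute window pass, then start it again with a longer --cert-expiration so the expired certificates are regenerated. A sketch with an illustrative profile name:

    # First start: issue cluster certificates that expire after 3 minutes.
    minikube start -p cert-expiration-demo --memory=3072 --cert-expiration=3m \
      --driver=kvm2 --container-runtime=crio

    # Second start, once the 3m certificates have had time to expire:
    # minikube reissues them with the new (one year) lifetime.
    minikube start -p cert-expiration-demo --memory=3072 --cert-expiration=8760h \
      --driver=kvm2 --container-runtime=crio

    minikube delete -p cert-expiration-demo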

TestForceSystemdFlag (58.49s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-267995 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-267995 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (57.315570441s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-267995 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-267995" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-267995
--- PASS: TestForceSystemdFlag (58.49s)
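The --force-systemd check boils down to reading the CRI-O drop-in that minikube writes inside the node. A rough equivalent of the commands above, with an illustrative profile name:

    # Start a profile that forces the systemd cgroup manager.
    minikube start -p force-systemd-demo --memory=3072 --force-systemd \
      --driver=kvm2 --container-runtime=crio

    # The drop-in minikube writes should configure CRI-O for the systemd cgroup manager.
    minikube -p force-systemd-demo ssh "cat /etc/crio/crio.conf.d/02-crio.conf"

    minikube delete -p force-systemd-demo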

TestForceSystemdEnv (55.54s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-059721 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-059721 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (54.638104215s)
helpers_test.go:175: Cleaning up "force-systemd-env-059721" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-059721
--- PASS: TestForceSystemdEnv (55.54s)

TestErrorSpam/setup (35.62s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-476225 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-476225 --driver=kvm2  --container-runtime=crio
E1026 14:23:55.625669  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:23:55.632067  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:23:55.643402  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:23:55.664810  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:23:55.706196  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:23:55.787651  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:23:55.949188  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:23:56.270970  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:23:56.912709  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:23:58.194343  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:24:00.757329  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:24:05.878893  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-476225 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-476225 --driver=kvm2  --container-runtime=crio: (35.624830794s)
--- PASS: TestErrorSpam/setup (35.62s)

TestErrorSpam/start (0.33s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-476225 --log_dir /tmp/nospam-476225 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-476225 --log_dir /tmp/nospam-476225 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-476225 --log_dir /tmp/nospam-476225 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

TestErrorSpam/status (0.65s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-476225 --log_dir /tmp/nospam-476225 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-476225 --log_dir /tmp/nospam-476225 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-476225 --log_dir /tmp/nospam-476225 status
--- PASS: TestErrorSpam/status (0.65s)

TestErrorSpam/pause (1.52s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-476225 --log_dir /tmp/nospam-476225 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-476225 --log_dir /tmp/nospam-476225 pause
E1026 14:24:16.120850  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-476225 --log_dir /tmp/nospam-476225 pause
--- PASS: TestErrorSpam/pause (1.52s)

TestErrorSpam/unpause (1.8s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-476225 --log_dir /tmp/nospam-476225 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-476225 --log_dir /tmp/nospam-476225 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-476225 --log_dir /tmp/nospam-476225 unpause
--- PASS: TestErrorSpam/unpause (1.80s)

TestErrorSpam/stop (5.37s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-476225 --log_dir /tmp/nospam-476225 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-476225 --log_dir /tmp/nospam-476225 stop: (1.833716595s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-476225 --log_dir /tmp/nospam-476225 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-476225 --log_dir /tmp/nospam-476225 stop: (1.55771724s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-476225 --log_dir /tmp/nospam-476225 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-476225 --log_dir /tmp/nospam-476225 stop: (1.97563733s)
--- PASS: TestErrorSpam/stop (5.37s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21664-137233/.minikube/files/etc/test/nested/copy/141233/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (50.84s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-946873 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1026 14:24:36.602444  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-946873 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (50.835192394s)
--- PASS: TestFunctional/serial/StartWithProxy (50.84s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (37.83s)

=== RUN   TestFunctional/serial/SoftStart
I1026 14:25:15.242170  141233 config.go:182] Loaded profile config "functional-946873": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-946873 --alsologtostderr -v=8
E1026 14:25:17.564663  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-946873 --alsologtostderr -v=8: (37.833531147s)
functional_test.go:678: soft start took 37.834199724s for "functional-946873" cluster.
I1026 14:25:53.076086  141233 config.go:182] Loaded profile config "functional-946873": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (37.83s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-946873 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-946873 cache add registry.k8s.io/pause:3.1: (1.098586065s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-946873 cache add registry.k8s.io/pause:3.3: (1.087874157s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-946873 cache add registry.k8s.io/pause:latest: (1.134208082s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.32s)

TestFunctional/serial/CacheCmd/cache/add_local (2.24s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-946873 /tmp/TestFunctionalserialCacheCmdcacheadd_local1453135336/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 cache add minikube-local-cache-test:functional-946873
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-946873 cache add minikube-local-cache-test:functional-946873: (1.917369759s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 cache delete minikube-local-cache-test:functional-946873
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-946873
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.24s)
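The local-image path mirrors what the log shows: build an image on the host with docker, add it to minikube's cache, then remove it again. The image name and build context here are illustrative; the profile name is the one from this run.

    # Build a throwaway image on the host.
    docker build -t local-cache-demo:latest .

    # Copy it into the profile's image cache, then drop it from the cache and the host.
    minikube -p functional-946873 cache add local-cache-demo:latest
    minikube -p functional-946873 cache delete local-cache-demo:latest
    docker rmi local-cache-demo:latest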

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.54s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-946873 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (177.300001ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.54s)
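In plain commands, the reload sequence above is: delete the image inside the node, confirm it is gone, then repopulate it from the host-side cache. The commands are the ones logged, with `minikube` standing in for the test build.

    # Remove the cached image from the node's container storage.
    minikube -p functional-946873 ssh sudo crictl rmi registry.k8s.io/pause:latest

    # This now fails with "no such image", as captured in the stdout above.
    minikube -p functional-946873 ssh sudo crictl inspecti registry.k8s.io/pause:latest

    # Reload everything from minikube's cache into the node; the inspect then succeeds.
    minikube -p functional-946873 cache reload
    minikube -p functional-946873 ssh sudo crictl inspecti registry.k8s.io/pause:latest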

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 kubectl -- --context functional-946873 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-946873 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (32.18s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-946873 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-946873 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.183308998s)
functional_test.go:776: restart took 32.183444743s for "functional-946873" cluster.
I1026 14:26:33.156965  141233 config.go:182] Loaded profile config "functional-946873": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (32.18s)
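--extra-config passes a component flag straight through to the running cluster; the restart above is equivalent to the following single command against the existing profile.

    # Restart the profile, enabling an extra apiserver admission plugin and waiting
    # for all components to become healthy again.
    minikube start -p functional-946873 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
      --wait=all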

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-946873 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.33s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-946873 logs: (1.328147835s)
--- PASS: TestFunctional/serial/LogsCmd (1.33s)

TestFunctional/serial/LogsFileCmd (1.33s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 logs --file /tmp/TestFunctionalserialLogsFileCmd724830871/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-946873 logs --file /tmp/TestFunctionalserialLogsFileCmd724830871/001/logs.txt: (1.328132591s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.33s)

TestFunctional/serial/InvalidService (4.66s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-946873 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-946873
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-946873: exit status 115 (296.999785ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL             │
	├───────────┼─────────────┼─────────────┼────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.57:31281 │
	└───────────┴─────────────┴─────────────┴────────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-946873 delete -f testdata/invalidsvc.yaml
E1026 14:26:39.486035  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2332: (dbg) Done: kubectl --context functional-946873 delete -f testdata/invalidsvc.yaml: (1.170827076s)
--- PASS: TestFunctional/serial/InvalidService (4.66s)

TestFunctional/parallel/ConfigCmd (0.4s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-946873 config get cpus: exit status 14 (65.109112ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-946873 config get cpus: exit status 14 (61.266814ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.40s)
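The config subcommands behave as the run shows: `get` on a key that is not set exits with status 14, while set/get/unset round-trip a value. A compact sketch using the same key and profile:

    minikube -p functional-946873 config unset cpus
    minikube -p functional-946873 config get cpus    # exit status 14: key not found
    minikube -p functional-946873 config set cpus 2
    minikube -p functional-946873 config get cpus    # prints 2
    minikube -p functional-946873 config unset cpus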

TestFunctional/parallel/DashboardCmd (14.53s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-946873 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-946873 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 148012: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.53s)

TestFunctional/parallel/DryRun (0.21s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-946873 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-946873 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (108.958543ms)

-- stdout --
	* [functional-946873] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21664-137233/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-137233/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1026 14:27:07.061809  147968 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:27:07.062077  147968 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:27:07.062087  147968 out.go:374] Setting ErrFile to fd 2...
	I1026 14:27:07.062091  147968 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:27:07.062274  147968 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-137233/.minikube/bin
	I1026 14:27:07.062710  147968 out.go:368] Setting JSON to false
	I1026 14:27:07.063522  147968 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4161,"bootTime":1761484666,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 14:27:07.063620  147968 start.go:141] virtualization: kvm guest
	I1026 14:27:07.065191  147968 out.go:179] * [functional-946873] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 14:27:07.066154  147968 notify.go:220] Checking for updates...
	I1026 14:27:07.066195  147968 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 14:27:07.067239  147968 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 14:27:07.068178  147968 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-137233/kubeconfig
	I1026 14:27:07.069139  147968 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-137233/.minikube
	I1026 14:27:07.070069  147968 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 14:27:07.070903  147968 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 14:27:07.072106  147968 config.go:182] Loaded profile config "functional-946873": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:27:07.072569  147968 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 14:27:07.104325  147968 out.go:179] * Using the kvm2 driver based on existing profile
	I1026 14:27:07.105203  147968 start.go:305] selected driver: kvm2
	I1026 14:27:07.105216  147968 start.go:925] validating driver "kvm2" against &{Name:functional-946873 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-946873 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
ntString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 14:27:07.105320  147968 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 14:27:07.107181  147968 out.go:203] 
	W1026 14:27:07.108050  147968 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1026 14:27:07.108858  147968 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-946873 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.21s)
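A dry run validates the requested resources without touching the VM. Asking for 250MB reproduces the RSRC_INSUFFICIENT_REQ_MEMORY failure captured above (exit status 23); without the override, the dry run succeeds against the existing profile.

    # Fails validation: 250MiB is below the 1800MB usable minimum minikube enforces.
    minikube start -p functional-946873 --dry-run --memory 250MB \
      --alsologtostderr --driver=kvm2 --container-runtime=crio
    echo $?    # 23 in the run above

    # The same dry run without the memory override succeeds.
    minikube start -p functional-946873 --dry-run --alsologtostderr -v=1 \
      --driver=kvm2 --container-runtime=crio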

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-946873 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-946873 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (107.794662ms)

-- stdout --
	* [functional-946873] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21664-137233/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-137233/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1026 14:27:06.270058  147920 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:27:06.270416  147920 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:27:06.270431  147920 out.go:374] Setting ErrFile to fd 2...
	I1026 14:27:06.270437  147920 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:27:06.271134  147920 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-137233/.minikube/bin
	I1026 14:27:06.272067  147920 out.go:368] Setting JSON to false
	I1026 14:27:06.272865  147920 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4160,"bootTime":1761484666,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 14:27:06.272969  147920 start.go:141] virtualization: kvm guest
	I1026 14:27:06.274372  147920 out.go:179] * [functional-946873] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1026 14:27:06.275423  147920 notify.go:220] Checking for updates...
	I1026 14:27:06.275430  147920 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 14:27:06.276482  147920 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 14:27:06.277433  147920 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-137233/kubeconfig
	I1026 14:27:06.278428  147920 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-137233/.minikube
	I1026 14:27:06.279351  147920 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 14:27:06.280249  147920 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 14:27:06.281498  147920 config.go:182] Loaded profile config "functional-946873": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:27:06.281870  147920 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 14:27:06.311388  147920 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1026 14:27:06.312199  147920 start.go:305] selected driver: kvm2
	I1026 14:27:06.312214  147920 start.go:925] validating driver "kvm2" against &{Name:functional-946873 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-946873 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
ntString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 14:27:06.312328  147920 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 14:27:06.313946  147920 out.go:203] 
	W1026 14:27:06.314766  147920 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1026 14:27:06.315569  147920 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/StatusCmd (0.68s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.68s)

TestFunctional/parallel/ServiceCmdConnect (19.73s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-946873 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-946873 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-8blhp" [ece8ab31-6d61-4af6-b680-0643832517f4] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-8blhp" [ece8ab31-6d61-4af6-b680-0643832517f4] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 19.002919156s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.57:31312
functional_test.go:1680: http://192.168.39.57:31312: success! body:
Request served by hello-node-connect-7d85dfc575-8blhp

HTTP/1.1 GET /

Host: 192.168.39.57:31312
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (19.73s)
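The NodePort round trip corresponds to the following kubectl/minikube sequence. Deployment name and image are the ones logged; the test fetches the URL from Go, so the curl at the end is only an illustrative stand-in.

    kubectl --context functional-946873 create deployment hello-node-connect --image kicbase/echo-server
    kubectl --context functional-946873 expose deployment hello-node-connect --type=NodePort --port=8080
    kubectl --context functional-946873 get pods -l app=hello-node-connect    # wait until Running

    # Resolve the NodePort URL and hit it; the echo server reflects the request back.
    URL=$(minikube -p functional-946873 service hello-node-connect --url)
    curl -s "$URL"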

TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (44.84s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [ad37f4f5-9666-4c07-a965-3b03cca30242] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.005773585s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-946873 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-946873 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-946873 get pvc myclaim -o=json
I1026 14:26:47.187173  141233 retry.go:31] will retry after 2.192941918s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:a2a03290-7de5-467f-9b89-ed5111a692d0 ResourceVersion:670 Generation:0 CreationTimestamp:2025-10-26 14:26:47 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc0019261e0 VolumeMode:0xc0019261f0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-946873 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-946873 apply -f testdata/storage-provisioner/pod.yaml
I1026 14:26:49.805924  141233 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [3adc6edb-fe72-45a2-93ae-e5c3d46b5ccf] Pending
helpers_test.go:352: "sp-pod" [3adc6edb-fe72-45a2-93ae-e5c3d46b5ccf] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [3adc6edb-fe72-45a2-93ae-e5c3d46b5ccf] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 22.004336947s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-946873 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-946873 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-946873 delete -f testdata/storage-provisioner/pod.yaml: (1.523747012s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-946873 apply -f testdata/storage-provisioner/pod.yaml
I1026 14:27:13.597305  141233 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [ae3a88a9-602b-4837-a7b2-3c77664211b6] Pending
helpers_test.go:352: "sp-pod" [ae3a88a9-602b-4837-a7b2-3c77664211b6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [ae3a88a9-602b-4837-a7b2-3c77664211b6] Running
2025/10/26 14:27:21 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.010100772s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-946873 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (44.84s)
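The claim-and-pod dance above, in plain kubectl. The manifests are the testdata files from the minikube repository that the test applies, so the paths are relative to the test's working directory.

    # Create the claim and let the storage-provisioner addon bind it.
    kubectl --context functional-946873 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-946873 get pvc myclaim -o=json    # phase should reach Bound

    # Mount it in a pod, write a file, then delete and recreate the pod.
    kubectl --context functional-946873 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-946873 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-946873 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-946873 apply -f testdata/storage-provisioner/pod.yaml

    # The file survives the pod restart because it lives on the provisioned volume.
    kubectl --context functional-946873 exec sp-pod -- ls /tmp/mount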

TestFunctional/parallel/SSHCmd (0.34s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.34s)

                                                
                                    
TestFunctional/parallel/CpCmd (1s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 ssh -n functional-946873 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 cp functional-946873:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1157448885/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 ssh -n functional-946873 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 ssh -n functional-946873 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.00s)

                                                
                                    
TestFunctional/parallel/MySQL (22.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-946873 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-c6nhx" [cd0317fe-9f5b-41f7-87b6-b1cb7b5a6658] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-c6nhx" [cd0317fe-9f5b-41f7-87b6-b1cb7b5a6658] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.003872259s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-946873 exec mysql-5bb876957f-c6nhx -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-946873 exec mysql-5bb876957f-c6nhx -- mysql -ppassword -e "show databases;": exit status 1 (129.730781ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1026 14:27:01.538259  141233 retry.go:31] will retry after 688.185593ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-946873 exec mysql-5bb876957f-c6nhx -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-946873 exec mysql-5bb876957f-c6nhx -- mysql -ppassword -e "show databases;": exit status 1 (428.083731ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1026 14:27:02.655853  141233 retry.go:31] will retry after 1.108832454s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-946873 exec mysql-5bb876957f-c6nhx -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (22.76s)
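
The two non-zero exits above are expected: the pod reports Running before mysqld has finished creating its socket, so the test simply retries the query with an increasing delay (the retry.go lines). A rough sketch of that retry shape around the same kubectl exec command (illustrative; the attempt count and backoff values here are arbitrary):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		delay := 500 * time.Millisecond
		for attempt := 1; attempt <= 5; attempt++ {
			out, err := exec.Command("kubectl", "--context", "functional-946873",
				"exec", "mysql-5bb876957f-c6nhx", "--",
				"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
			if err == nil {
				fmt.Print(string(out)) // mysqld is accepting connections now
				return
			}
			// ERROR 2002 means the server socket is not ready yet; back off and try again.
			fmt.Printf("attempt %d failed (%v), retrying in %v\n", attempt, err, delay)
			time.Sleep(delay)
			delay *= 2
		}
	}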

                                                
                                    
TestFunctional/parallel/FileSync (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/141233/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 ssh "sudo cat /etc/test/nested/copy/141233/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.17s)

                                                
                                    
TestFunctional/parallel/CertSync (1.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/141233.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 ssh "sudo cat /etc/ssl/certs/141233.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/141233.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 ssh "sudo cat /usr/share/ca-certificates/141233.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/1412332.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 ssh "sudo cat /etc/ssl/certs/1412332.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/1412332.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 ssh "sudo cat /usr/share/ca-certificates/1412332.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.05s)
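
For reference, the test probes the synced certs under both /etc/ssl/certs and /usr/share/ca-certificates plus the hashed .0 entries; a trivial loop over the same six paths (illustrative only):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// The same six locations the test checks inside the VM.
		paths := []string{
			"/etc/ssl/certs/141233.pem",
			"/usr/share/ca-certificates/141233.pem",
			"/etc/ssl/certs/51391683.0",
			"/etc/ssl/certs/1412332.pem",
			"/usr/share/ca-certificates/1412332.pem",
			"/etc/ssl/certs/3ec20f2e.0",
		}
		for _, p := range paths {
			err := exec.Command("out/minikube-linux-amd64", "-p", "functional-946873",
				"ssh", "sudo cat "+p).Run()
			if err != nil {
				fmt.Printf("missing or unreadable: %s (%v)\n", p, err)
			}
		}
	}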

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-946873 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-946873 ssh "sudo systemctl is-active docker": exit status 1 (169.755614ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-946873 ssh "sudo systemctl is-active containerd": exit status 1 (168.717266ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.34s)
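
Exit status 3 is the usual systemd code for an inactive unit, so a failing `systemctl is-active` with "inactive" on stdout is exactly the outcome the test wants when crio is the configured runtime. A minimal version of that check (illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		for _, unit := range []string{"docker", "containerd"} {
			out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-946873",
				"ssh", "sudo systemctl is-active "+unit).Output()
			state := strings.TrimSpace(string(out))
			// A non-zero exit (typically 3) together with "inactive" is the expected result.
			switch {
			case err != nil && state == "inactive":
				fmt.Printf("%s is inactive, as expected\n", unit)
			case err == nil:
				fmt.Printf("%s is unexpectedly active: %s\n", unit, state)
			default:
				fmt.Printf("%s check failed: %v (%q)\n", unit, err, state)
			}
		}
	}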

                                                
                                    
TestFunctional/parallel/License (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.46s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-946873 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-946873
localhost/kicbase/echo-server:functional-946873
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-946873 image ls --format short --alsologtostderr:
I1026 14:27:16.444126  148159 out.go:360] Setting OutFile to fd 1 ...
I1026 14:27:16.444431  148159 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 14:27:16.444442  148159 out.go:374] Setting ErrFile to fd 2...
I1026 14:27:16.444446  148159 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 14:27:16.444649  148159 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-137233/.minikube/bin
I1026 14:27:16.445225  148159 config.go:182] Loaded profile config "functional-946873": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 14:27:16.445314  148159 config.go:182] Loaded profile config "functional-946873": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 14:27:16.447556  148159 ssh_runner.go:195] Run: systemctl --version
I1026 14:27:16.449909  148159 main.go:141] libmachine: domain functional-946873 has defined MAC address 52:54:00:ac:95:f6 in network mk-functional-946873
I1026 14:27:16.450330  148159 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ac:95:f6", ip: ""} in network mk-functional-946873: {Iface:virbr1 ExpiryTime:2025-10-26 15:24:39 +0000 UTC Type:0 Mac:52:54:00:ac:95:f6 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:functional-946873 Clientid:01:52:54:00:ac:95:f6}
I1026 14:27:16.450368  148159 main.go:141] libmachine: domain functional-946873 has defined IP address 192.168.39.57 and MAC address 52:54:00:ac:95:f6 in network mk-functional-946873
I1026 14:27:16.450558  148159 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/functional-946873/id_rsa Username:docker}
I1026 14:27:16.542185  148159 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-946873 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-946873  │ 9056ab77afb8e │ 4.94MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/library/nginx                 │ latest             │ 657fdcd1c3659 │ 155MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ localhost/minikube-local-cache-test     │ functional-946873  │ ede928bcd3a67 │ 3.33kB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-946873 image ls --format table --alsologtostderr:
I1026 14:27:19.098772  148254 out.go:360] Setting OutFile to fd 1 ...
I1026 14:27:19.099032  148254 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 14:27:19.099040  148254 out.go:374] Setting ErrFile to fd 2...
I1026 14:27:19.099045  148254 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 14:27:19.099245  148254 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-137233/.minikube/bin
I1026 14:27:19.099878  148254 config.go:182] Loaded profile config "functional-946873": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 14:27:19.099971  148254 config.go:182] Loaded profile config "functional-946873": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 14:27:19.102325  148254 ssh_runner.go:195] Run: systemctl --version
I1026 14:27:19.104626  148254 main.go:141] libmachine: domain functional-946873 has defined MAC address 52:54:00:ac:95:f6 in network mk-functional-946873
I1026 14:27:19.105012  148254 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ac:95:f6", ip: ""} in network mk-functional-946873: {Iface:virbr1 ExpiryTime:2025-10-26 15:24:39 +0000 UTC Type:0 Mac:52:54:00:ac:95:f6 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:functional-946873 Clientid:01:52:54:00:ac:95:f6}
I1026 14:27:19.105041  148254 main.go:141] libmachine: domain functional-946873 has defined IP address 192.168.39.57 and MAC address 52:54:00:ac:95:f6 in network mk-functional-946873
I1026 14:27:19.105180  148254 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/functional-946873/id_rsa Username:docker}
I1026 14:27:19.189727  148254 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-946873 image ls --format json --alsologtostderr:
[{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340e
ce6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"657fdcd1c3659cf57cfaa13f40842e0a26b49ec9654d48fdefee9fc8259b4aab","repoDigests":["docker.io/library/nginx@sha256:029d4461bd98f124e531380505ceea2072418fdf28752aa73b7b273ba3048903","docker.io/library/nginx@sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8"],"repoTags":["docker.io/library/nginx:latest"],"size":"155467611"},{"id":"ede928bcd3a67df6b5f7a47f945dd57fc79004ca6bcd1dd15ec3164e18561bbc","repoDigests":["localhost/minikube-local-cache-test@sha256:7564f08f41b74ac6c2cda484d8ea1033f0071db7ab85c43895f3dda77b75646e"],"repoTags":["localhost/minikube-local-cache-test:functional-946873"],"size":"3330"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73
ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"07655ddf2eebe5d250f7a72c25
f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"409467f978b4a30fe71701
2736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["regis
try.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae6829615007
8d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-946873"],"size":"4943877"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-946873 image ls --format json --alsologtostderr:
I1026 14:27:18.866411  148243 out.go:360] Setting OutFile to fd 1 ...
I1026 14:27:18.866688  148243 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 14:27:18.866698  148243 out.go:374] Setting ErrFile to fd 2...
I1026 14:27:18.866703  148243 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 14:27:18.866951  148243 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-137233/.minikube/bin
I1026 14:27:18.867588  148243 config.go:182] Loaded profile config "functional-946873": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 14:27:18.867703  148243 config.go:182] Loaded profile config "functional-946873": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 14:27:18.870030  148243 ssh_runner.go:195] Run: systemctl --version
I1026 14:27:18.872412  148243 main.go:141] libmachine: domain functional-946873 has defined MAC address 52:54:00:ac:95:f6 in network mk-functional-946873
I1026 14:27:18.872841  148243 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ac:95:f6", ip: ""} in network mk-functional-946873: {Iface:virbr1 ExpiryTime:2025-10-26 15:24:39 +0000 UTC Type:0 Mac:52:54:00:ac:95:f6 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:functional-946873 Clientid:01:52:54:00:ac:95:f6}
I1026 14:27:18.872868  148243 main.go:141] libmachine: domain functional-946873 has defined IP address 192.168.39.57 and MAC address 52:54:00:ac:95:f6 in network mk-functional-946873
I1026 14:27:18.873022  148243 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/functional-946873/id_rsa Username:docker}
I1026 14:27:18.959641  148243 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
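
The JSON output above is a flat array of image records, which makes it the easiest format to consume programmatically; a small decoding sketch with the field names taken from that output (the struct itself is illustrative):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// image mirrors the records printed by `minikube image ls --format json`.
	type image struct {
		ID          string   `json:"id"`
		RepoDigests []string `json:"repoDigests"`
		RepoTags    []string `json:"repoTags"`
		Size        string   `json:"size"`
	}

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-946873",
			"image", "ls", "--format", "json").Output()
		if err != nil {
			panic(err)
		}
		var images []image
		if err := json.Unmarshal(out, &images); err != nil {
			panic(err)
		}
		for _, img := range images {
			name := img.ID
			if len(img.RepoTags) > 0 { // repoTags can be empty for untagged images
				name = img.RepoTags[0]
			}
			fmt.Printf("%-60s %s bytes\n", name, img.Size)
		}
	}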

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-946873 image ls --format yaml --alsologtostderr:
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 657fdcd1c3659cf57cfaa13f40842e0a26b49ec9654d48fdefee9fc8259b4aab
repoDigests:
- docker.io/library/nginx@sha256:029d4461bd98f124e531380505ceea2072418fdf28752aa73b7b273ba3048903
- docker.io/library/nginx@sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8
repoTags:
- docker.io/library/nginx:latest
size: "155467611"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-946873
size: "4943877"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: ede928bcd3a67df6b5f7a47f945dd57fc79004ca6bcd1dd15ec3164e18561bbc
repoDigests:
- localhost/minikube-local-cache-test@sha256:7564f08f41b74ac6c2cda484d8ea1033f0071db7ab85c43895f3dda77b75646e
repoTags:
- localhost/minikube-local-cache-test:functional-946873
size: "3330"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-946873 image ls --format yaml --alsologtostderr:
I1026 14:27:16.694809  148170 out.go:360] Setting OutFile to fd 1 ...
I1026 14:27:16.695105  148170 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 14:27:16.695119  148170 out.go:374] Setting ErrFile to fd 2...
I1026 14:27:16.695126  148170 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 14:27:16.695382  148170 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-137233/.minikube/bin
I1026 14:27:16.696017  148170 config.go:182] Loaded profile config "functional-946873": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 14:27:16.696112  148170 config.go:182] Loaded profile config "functional-946873": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 14:27:16.698353  148170 ssh_runner.go:195] Run: systemctl --version
I1026 14:27:16.700788  148170 main.go:141] libmachine: domain functional-946873 has defined MAC address 52:54:00:ac:95:f6 in network mk-functional-946873
I1026 14:27:16.701348  148170 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ac:95:f6", ip: ""} in network mk-functional-946873: {Iface:virbr1 ExpiryTime:2025-10-26 15:24:39 +0000 UTC Type:0 Mac:52:54:00:ac:95:f6 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:functional-946873 Clientid:01:52:54:00:ac:95:f6}
I1026 14:27:16.701394  148170 main.go:141] libmachine: domain functional-946873 has defined IP address 192.168.39.57 and MAC address 52:54:00:ac:95:f6 in network mk-functional-946873
I1026 14:27:16.701586  148170 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/functional-946873/id_rsa Username:docker}
I1026 14:27:16.797153  148170 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-946873 ssh pgrep buildkitd: exit status 1 (192.073074ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 image build -t localhost/my-image:functional-946873 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-946873 image build -t localhost/my-image:functional-946873 testdata/build --alsologtostderr: (4.430099211s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-946873 image build -t localhost/my-image:functional-946873 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 14a2982574e
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-946873
--> a2317100889
Successfully tagged localhost/my-image:functional-946873
a23171008899699c6683eff745a1f43c3a7482e7745c1f63ebd66866d278b9cf
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-946873 image build -t localhost/my-image:functional-946873 testdata/build --alsologtostderr:
I1026 14:27:17.202150  148201 out.go:360] Setting OutFile to fd 1 ...
I1026 14:27:17.202516  148201 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 14:27:17.202526  148201 out.go:374] Setting ErrFile to fd 2...
I1026 14:27:17.202534  148201 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 14:27:17.202834  148201 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-137233/.minikube/bin
I1026 14:27:17.203515  148201 config.go:182] Loaded profile config "functional-946873": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 14:27:17.204260  148201 config.go:182] Loaded profile config "functional-946873": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 14:27:17.206987  148201 ssh_runner.go:195] Run: systemctl --version
I1026 14:27:17.211350  148201 main.go:141] libmachine: domain functional-946873 has defined MAC address 52:54:00:ac:95:f6 in network mk-functional-946873
I1026 14:27:17.211884  148201 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ac:95:f6", ip: ""} in network mk-functional-946873: {Iface:virbr1 ExpiryTime:2025-10-26 15:24:39 +0000 UTC Type:0 Mac:52:54:00:ac:95:f6 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:functional-946873 Clientid:01:52:54:00:ac:95:f6}
I1026 14:27:17.211926  148201 main.go:141] libmachine: domain functional-946873 has defined IP address 192.168.39.57 and MAC address 52:54:00:ac:95:f6 in network mk-functional-946873
I1026 14:27:17.212108  148201 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/functional-946873/id_rsa Username:docker}
I1026 14:27:17.329865  148201 build_images.go:161] Building image from path: /tmp/build.1729779081.tar
I1026 14:27:17.329956  148201 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1026 14:27:17.349219  148201 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1729779081.tar
I1026 14:27:17.358213  148201 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1729779081.tar: stat -c "%s %y" /var/lib/minikube/build/build.1729779081.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1729779081.tar': No such file or directory
I1026 14:27:17.358249  148201 ssh_runner.go:362] scp /tmp/build.1729779081.tar --> /var/lib/minikube/build/build.1729779081.tar (3072 bytes)
I1026 14:27:17.418711  148201 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1729779081
I1026 14:27:17.435766  148201 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1729779081 -xf /var/lib/minikube/build/build.1729779081.tar
I1026 14:27:17.455235  148201 crio.go:315] Building image: /var/lib/minikube/build/build.1729779081
I1026 14:27:17.455320  148201 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-946873 /var/lib/minikube/build/build.1729779081 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1026 14:27:21.512052  148201 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-946873 /var/lib/minikube/build/build.1729779081 --cgroup-manager=cgroupfs: (4.056706317s)
I1026 14:27:21.512122  148201 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1729779081
I1026 14:27:21.529256  148201 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1729779081.tar
I1026 14:27:21.541054  148201 build_images.go:217] Built localhost/my-image:functional-946873 from /tmp/build.1729779081.tar
I1026 14:27:21.541086  148201 build_images.go:133] succeeded building to: functional-946873
I1026 14:27:21.541092  148201 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.82s)
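
The stderr above spells out how `image build` works on a VM driver: the build context is packed into a tar (/tmp/build.1729779081.tar here), copied into the VM, unpacked under /var/lib/minikube/build, and handed to podman. A minimal sketch of just the packing step, roughly what build_images.go does before the scp (illustrative; the paths are hypothetical):

	package main

	import (
		"archive/tar"
		"io"
		"os"
		"path/filepath"
	)

	// tarDir packs every regular file under dir into a tar archive at dest,
	// similar to the build-context tar the log shows being copied to the VM.
	func tarDir(dir, dest string) error {
		f, err := os.Create(dest)
		if err != nil {
			return err
		}
		defer f.Close()
		tw := tar.NewWriter(f)
		defer tw.Close()

		return filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
			if err != nil || !info.Mode().IsRegular() {
				return err
			}
			rel, _ := filepath.Rel(dir, path)
			hdr, err := tar.FileInfoHeader(info, "")
			if err != nil {
				return err
			}
			hdr.Name = rel
			if err := tw.WriteHeader(hdr); err != nil {
				return err
			}
			src, err := os.Open(path)
			if err != nil {
				return err
			}
			defer src.Close()
			_, err = io.Copy(tw, src)
			return err
		})
	}

	func main() {
		if err := tarDir("testdata/build", "/tmp/build-context.tar"); err != nil {
			panic(err)
		}
	}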

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.911711064s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-946873
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.93s)

                                                
                                    
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
TestFunctional/parallel/Version/components (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.67s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 image load --daemon kicbase/echo-server:functional-946873 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-946873 image load --daemon kicbase/echo-server:functional-946873 --alsologtostderr: (1.125761362s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.32s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (19.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-946873 /tmp/TestFunctionalparallelMountCmdany-port4108778799/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1761488803385791919" to /tmp/TestFunctionalparallelMountCmdany-port4108778799/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1761488803385791919" to /tmp/TestFunctionalparallelMountCmdany-port4108778799/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1761488803385791919" to /tmp/TestFunctionalparallelMountCmdany-port4108778799/001/test-1761488803385791919
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-946873 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (160.081222ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1026 14:26:43.546241  141233 retry.go:31] will retry after 507.262252ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 26 14:26 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 26 14:26 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 26 14:26 test-1761488803385791919
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 ssh cat /mount-9p/test-1761488803385791919
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-946873 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [754ef966-23bb-4a9b-91d4-60f4b75d0cc7] Pending
helpers_test.go:352: "busybox-mount" [754ef966-23bb-4a9b-91d4-60f4b75d0cc7] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [754ef966-23bb-4a9b-91d4-60f4b75d0cc7] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [754ef966-23bb-4a9b-91d4-60f4b75d0cc7] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 17.004722519s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-946873 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-946873 /tmp/TestFunctionalparallelMountCmdany-port4108778799/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (19.07s)
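
The mount test starts `minikube mount` as a background daemon and then polls findmnt over ssh until the 9p filesystem shows up, which is why the first findmnt attempt above fails and is retried. A condensed sketch of that pattern (the host directory below is hypothetical):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Start the 9p mount as a long-running background process, like the test's daemon step.
		mount := exec.Command("out/minikube-linux-amd64", "mount", "-p", "functional-946873",
			"/tmp/mount-src:/mount-9p")
		if err := mount.Start(); err != nil {
			panic(err)
		}
		defer mount.Process.Kill() // the test stops the daemon once its checks pass

		// Poll until the guest actually sees a 9p filesystem at /mount-9p.
		for i := 0; i < 10; i++ {
			err := exec.Command("out/minikube-linux-amd64", "-p", "functional-946873",
				"ssh", "findmnt -T /mount-9p | grep 9p").Run()
			if err == nil {
				fmt.Println("mount is visible in the guest")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("mount never appeared")
	}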

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 image load --daemon kicbase/echo-server:functional-946873 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.91s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-946873
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 image load --daemon kicbase/echo-server:functional-946873 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (3.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 image save kicbase/echo-server:functional-946873 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:395: (dbg) Done: out/minikube-linux-amd64 -p functional-946873 image save kicbase/echo-server:functional-946873 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (3.421465955s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (3.42s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (1.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 image rm kicbase/echo-server:functional-946873 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.10s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-linux-amd64 -p functional-946873 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (3.360435112s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.56s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-946873
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 image save --daemon kicbase/echo-server:functional-946873 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-946873
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)
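
Note: taken together, the ImageCommands subtests above exercise the full image round trip: load into the runtime, save to a tarball, remove, reload, and save back to the Docker daemon. A condensed manual reproduction (a sketch; the profile name and tag are taken from this run, and /tmp/echo-server.tar is a stand-in for the workspace path the test uses):
	docker pull kicbase/echo-server:latest
	docker tag kicbase/echo-server:latest kicbase/echo-server:functional-946873
	out/minikube-linux-amd64 -p functional-946873 image load --daemon kicbase/echo-server:functional-946873
	out/minikube-linux-amd64 -p functional-946873 image save kicbase/echo-server:functional-946873 /tmp/echo-server.tar
	out/minikube-linux-amd64 -p functional-946873 image rm kicbase/echo-server:functional-946873
	out/minikube-linux-amd64 -p functional-946873 image load /tmp/echo-server.tar
	out/minikube-linux-amd64 -p functional-946873 image save --daemon kicbase/echo-server:functional-946873
	out/minikube-linux-amd64 -p functional-946873 image ls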

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-946873 /tmp/TestFunctionalparallelMountCmdspecific-port3795191118/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-946873 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (230.859844ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1026 14:27:02.688523  141233 retry.go:31] will retry after 255.199781ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-946873 /tmp/TestFunctionalparallelMountCmdspecific-port3795191118/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-946873 ssh "sudo umount -f /mount-9p": exit status 1 (201.705914ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-946873 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-946873 /tmp/TestFunctionalparallelMountCmdspecific-port3795191118/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.26s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-946873 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1348443274/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-946873 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1348443274/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-946873 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1348443274/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-946873 ssh "findmnt -T" /mount1: exit status 1 (273.22975ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1026 14:27:03.992827  141233 retry.go:31] will retry after 626.244059ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-946873 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-946873 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1348443274/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-946873 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1348443274/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-946873 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1348443274/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.50s)
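
Note: the MountCmd subtests follow one pattern: start the 9p mount in the background, verify it from inside the guest, then tear everything down. A minimal manual equivalent (a sketch; /tmp/mnt is a hypothetical host directory, the remaining commands mirror the ones logged above):
	out/minikube-linux-amd64 mount -p functional-946873 /tmp/mnt:/mount-9p --port 46464 &
	out/minikube-linux-amd64 -p functional-946873 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-946873 ssh -- ls -la /mount-9p
	out/minikube-linux-amd64 -p functional-946873 ssh "sudo umount -f /mount-9p"
	out/minikube-linux-amd64 mount -p functional-946873 --kill=true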

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (11.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-946873 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-946873 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-pnk8g" [88950433-161b-45c6-8f92-084574b1fcfb] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-pnk8g" [88950433-161b-45c6-8f92-084574b1fcfb] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.004938413s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.39s)
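
Note: the hello-node deployment used by the ServiceCmd subtests below is created with two kubectl calls; waiting for readiness can be made explicit with kubectl wait (a sketch using the context and label from this run; the timeout value is an arbitrary choice, not part of the test):
	kubectl --context functional-946873 create deployment hello-node --image kicbase/echo-server
	kubectl --context functional-946873 expose deployment hello-node --type=NodePort --port=8080
	kubectl --context functional-946873 wait --for=condition=ready pod -l app=hello-node --timeout=120s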

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "274.362696ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "59.103304ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "238.323327ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "62.424922ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.30s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-946873 service list: (1.264941487s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.27s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-946873 service list -o json: (1.264992289s)
functional_test.go:1504: Took "1.265082793s" to run "out/minikube-linux-amd64 -p functional-946873 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.27s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.57:31915
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.30s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.29s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-946873 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.57:31915
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.32s)
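
Note: once hello-node is exposed, the same endpoint can be queried in the forms exercised above (a sketch; the reported IP and NodePort, 192.168.39.57:31915 in this run, differ per run):
	out/minikube-linux-amd64 -p functional-946873 service list
	out/minikube-linux-amd64 -p functional-946873 service list -o json
	out/minikube-linux-amd64 -p functional-946873 service --namespace=default --https --url hello-node
	out/minikube-linux-amd64 -p functional-946873 service hello-node --url --format={{.IP}}
	out/minikube-linux-amd64 -p functional-946873 service hello-node --url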

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-946873
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-946873
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-946873
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (192.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1026 14:28:55.618396  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:29:23.327702  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-500839 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (3m12.281960748s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (192.81s)
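
Note: the HA cluster used by the rest of this serial group is created by a single start invocation and then checked with status (a sketch; flags copied from the run above):
	out/minikube-linux-amd64 -p ha-500839 start --ha --memory 3072 --wait true --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 -p ha-500839 status --alsologtostderr -v 5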

                                                
                                    
TestMultiControlPlane/serial/DeployApp (9.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-500839 kubectl -- rollout status deployment/busybox: (6.857941965s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 kubectl -- exec busybox-7b57f96db7-jrtkv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 kubectl -- exec busybox-7b57f96db7-q55jp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 kubectl -- exec busybox-7b57f96db7-zmfgq -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 kubectl -- exec busybox-7b57f96db7-jrtkv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 kubectl -- exec busybox-7b57f96db7-q55jp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 kubectl -- exec busybox-7b57f96db7-zmfgq -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 kubectl -- exec busybox-7b57f96db7-jrtkv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 kubectl -- exec busybox-7b57f96db7-q55jp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 kubectl -- exec busybox-7b57f96db7-zmfgq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (9.18s)
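
Note: each busybox replica is probed for external and in-cluster DNS resolution. Per pod, the check reduces to the following (a sketch; the pod names above are specific to this run and would normally be read from the jsonpath query first):
	out/minikube-linux-amd64 -p ha-500839 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
	out/minikube-linux-amd64 -p ha-500839 kubectl -- exec busybox-7b57f96db7-jrtkv -- nslookup kubernetes.io
	out/minikube-linux-amd64 -p ha-500839 kubectl -- exec busybox-7b57f96db7-jrtkv -- nslookup kubernetes.default
	out/minikube-linux-amd64 -p ha-500839 kubectl -- exec busybox-7b57f96db7-jrtkv -- nslookup kubernetes.default.svc.cluster.local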

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 kubectl -- exec busybox-7b57f96db7-jrtkv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 kubectl -- exec busybox-7b57f96db7-jrtkv -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 kubectl -- exec busybox-7b57f96db7-q55jp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 kubectl -- exec busybox-7b57f96db7-q55jp -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 kubectl -- exec busybox-7b57f96db7-zmfgq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 kubectl -- exec busybox-7b57f96db7-zmfgq -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.25s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (45.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-500839 node add --alsologtostderr -v 5: (45.2256568s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (45.86s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-500839 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.65s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (10.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 cp testdata/cp-test.txt ha-500839:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 ssh -n ha-500839 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 cp ha-500839:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile920630264/001/cp-test_ha-500839.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 ssh -n ha-500839 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 cp ha-500839:/home/docker/cp-test.txt ha-500839-m02:/home/docker/cp-test_ha-500839_ha-500839-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 ssh -n ha-500839 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 ssh -n ha-500839-m02 "sudo cat /home/docker/cp-test_ha-500839_ha-500839-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 cp ha-500839:/home/docker/cp-test.txt ha-500839-m03:/home/docker/cp-test_ha-500839_ha-500839-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 ssh -n ha-500839 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 ssh -n ha-500839-m03 "sudo cat /home/docker/cp-test_ha-500839_ha-500839-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 cp ha-500839:/home/docker/cp-test.txt ha-500839-m04:/home/docker/cp-test_ha-500839_ha-500839-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 ssh -n ha-500839 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 ssh -n ha-500839-m04 "sudo cat /home/docker/cp-test_ha-500839_ha-500839-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 cp testdata/cp-test.txt ha-500839-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 ssh -n ha-500839-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 cp ha-500839-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile920630264/001/cp-test_ha-500839-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 ssh -n ha-500839-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 cp ha-500839-m02:/home/docker/cp-test.txt ha-500839:/home/docker/cp-test_ha-500839-m02_ha-500839.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 ssh -n ha-500839-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 ssh -n ha-500839 "sudo cat /home/docker/cp-test_ha-500839-m02_ha-500839.txt"
E1026 14:31:40.876023  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/functional-946873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:31:40.882495  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/functional-946873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:31:40.893923  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/functional-946873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:31:40.915272  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/functional-946873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:31:40.956633  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/functional-946873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 cp ha-500839-m02:/home/docker/cp-test.txt ha-500839-m03:/home/docker/cp-test_ha-500839-m02_ha-500839-m03.txt
E1026 14:31:41.038597  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/functional-946873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:31:41.200103  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/functional-946873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 ssh -n ha-500839-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 ssh -n ha-500839-m03 "sudo cat /home/docker/cp-test_ha-500839-m02_ha-500839-m03.txt"
E1026 14:31:41.522220  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/functional-946873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 cp ha-500839-m02:/home/docker/cp-test.txt ha-500839-m04:/home/docker/cp-test_ha-500839-m02_ha-500839-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 ssh -n ha-500839-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 ssh -n ha-500839-m04 "sudo cat /home/docker/cp-test_ha-500839-m02_ha-500839-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 cp testdata/cp-test.txt ha-500839-m03:/home/docker/cp-test.txt
E1026 14:31:42.164080  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/functional-946873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 ssh -n ha-500839-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 cp ha-500839-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile920630264/001/cp-test_ha-500839-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 ssh -n ha-500839-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 cp ha-500839-m03:/home/docker/cp-test.txt ha-500839:/home/docker/cp-test_ha-500839-m03_ha-500839.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 ssh -n ha-500839-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 ssh -n ha-500839 "sudo cat /home/docker/cp-test_ha-500839-m03_ha-500839.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 cp ha-500839-m03:/home/docker/cp-test.txt ha-500839-m02:/home/docker/cp-test_ha-500839-m03_ha-500839-m02.txt
E1026 14:31:43.445853  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/functional-946873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 ssh -n ha-500839-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 ssh -n ha-500839-m02 "sudo cat /home/docker/cp-test_ha-500839-m03_ha-500839-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 cp ha-500839-m03:/home/docker/cp-test.txt ha-500839-m04:/home/docker/cp-test_ha-500839-m03_ha-500839-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 ssh -n ha-500839-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 ssh -n ha-500839-m04 "sudo cat /home/docker/cp-test_ha-500839-m03_ha-500839-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 cp testdata/cp-test.txt ha-500839-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 ssh -n ha-500839-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 cp ha-500839-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile920630264/001/cp-test_ha-500839-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 ssh -n ha-500839-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 cp ha-500839-m04:/home/docker/cp-test.txt ha-500839:/home/docker/cp-test_ha-500839-m04_ha-500839.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 ssh -n ha-500839-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 ssh -n ha-500839 "sudo cat /home/docker/cp-test_ha-500839-m04_ha-500839.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 cp ha-500839-m04:/home/docker/cp-test.txt ha-500839-m02:/home/docker/cp-test_ha-500839-m04_ha-500839-m02.txt
E1026 14:31:46.007225  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/functional-946873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 ssh -n ha-500839-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 ssh -n ha-500839-m02 "sudo cat /home/docker/cp-test_ha-500839-m04_ha-500839-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 cp ha-500839-m04:/home/docker/cp-test.txt ha-500839-m03:/home/docker/cp-test_ha-500839-m04_ha-500839-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 ssh -n ha-500839-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 ssh -n ha-500839-m03 "sudo cat /home/docker/cp-test_ha-500839-m04_ha-500839-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (10.38s)
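
Note: CopyFile shuttles a test file to, from, and between every node with minikube cp, verifying each hop over ssh. One hop looks like this (a sketch using the node names from this run):
	out/minikube-linux-amd64 -p ha-500839 cp testdata/cp-test.txt ha-500839-m02:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p ha-500839 ssh -n ha-500839-m02 "sudo cat /home/docker/cp-test.txt"
	out/minikube-linux-amd64 -p ha-500839 cp ha-500839-m02:/home/docker/cp-test.txt ha-500839-m03:/home/docker/cp-test_ha-500839-m02_ha-500839-m03.txt
	out/minikube-linux-amd64 -p ha-500839 ssh -n ha-500839-m03 "sudo cat /home/docker/cp-test_ha-500839-m02_ha-500839-m03.txt"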

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (89.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 node stop m02 --alsologtostderr -v 5
E1026 14:31:51.129264  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/functional-946873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:32:01.371188  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/functional-946873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:32:21.852671  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/functional-946873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:33:02.815618  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/functional-946873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-500839 node stop m02 --alsologtostderr -v 5: (1m29.460640475s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-500839 status --alsologtostderr -v 5: exit status 7 (482.07811ms)

                                                
                                                
-- stdout --
	ha-500839
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-500839-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-500839-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-500839-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 14:33:16.549021  151269 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:33:16.549323  151269 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:33:16.549335  151269 out.go:374] Setting ErrFile to fd 2...
	I1026 14:33:16.549339  151269 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:33:16.549616  151269 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-137233/.minikube/bin
	I1026 14:33:16.549847  151269 out.go:368] Setting JSON to false
	I1026 14:33:16.549908  151269 mustload.go:65] Loading cluster: ha-500839
	I1026 14:33:16.550010  151269 notify.go:220] Checking for updates...
	I1026 14:33:16.550420  151269 config.go:182] Loaded profile config "ha-500839": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:33:16.550437  151269 status.go:174] checking status of ha-500839 ...
	I1026 14:33:16.552430  151269 status.go:371] ha-500839 host status = "Running" (err=<nil>)
	I1026 14:33:16.552450  151269 host.go:66] Checking if "ha-500839" exists ...
	I1026 14:33:16.554999  151269 main.go:141] libmachine: domain ha-500839 has defined MAC address 52:54:00:fe:32:e6 in network mk-ha-500839
	I1026 14:33:16.555511  151269 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fe:32:e6", ip: ""} in network mk-ha-500839: {Iface:virbr1 ExpiryTime:2025-10-26 15:27:42 +0000 UTC Type:0 Mac:52:54:00:fe:32:e6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:ha-500839 Clientid:01:52:54:00:fe:32:e6}
	I1026 14:33:16.555542  151269 main.go:141] libmachine: domain ha-500839 has defined IP address 192.168.39.158 and MAC address 52:54:00:fe:32:e6 in network mk-ha-500839
	I1026 14:33:16.555701  151269 host.go:66] Checking if "ha-500839" exists ...
	I1026 14:33:16.555933  151269 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 14:33:16.558352  151269 main.go:141] libmachine: domain ha-500839 has defined MAC address 52:54:00:fe:32:e6 in network mk-ha-500839
	I1026 14:33:16.558854  151269 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fe:32:e6", ip: ""} in network mk-ha-500839: {Iface:virbr1 ExpiryTime:2025-10-26 15:27:42 +0000 UTC Type:0 Mac:52:54:00:fe:32:e6 Iaid: IPaddr:192.168.39.158 Prefix:24 Hostname:ha-500839 Clientid:01:52:54:00:fe:32:e6}
	I1026 14:33:16.558903  151269 main.go:141] libmachine: domain ha-500839 has defined IP address 192.168.39.158 and MAC address 52:54:00:fe:32:e6 in network mk-ha-500839
	I1026 14:33:16.559101  151269 sshutil.go:53] new ssh client: &{IP:192.168.39.158 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/ha-500839/id_rsa Username:docker}
	I1026 14:33:16.646340  151269 ssh_runner.go:195] Run: systemctl --version
	I1026 14:33:16.652492  151269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 14:33:16.670504  151269 kubeconfig.go:125] found "ha-500839" server: "https://192.168.39.254:8443"
	I1026 14:33:16.670554  151269 api_server.go:166] Checking apiserver status ...
	I1026 14:33:16.670603  151269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 14:33:16.691031  151269 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1422/cgroup
	W1026 14:33:16.702875  151269 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1422/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1026 14:33:16.702955  151269 ssh_runner.go:195] Run: ls
	I1026 14:33:16.707948  151269 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1026 14:33:16.713179  151269 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1026 14:33:16.713208  151269 status.go:463] ha-500839 apiserver status = Running (err=<nil>)
	I1026 14:33:16.713223  151269 status.go:176] ha-500839 status: &{Name:ha-500839 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1026 14:33:16.713243  151269 status.go:174] checking status of ha-500839-m02 ...
	I1026 14:33:16.714933  151269 status.go:371] ha-500839-m02 host status = "Stopped" (err=<nil>)
	I1026 14:33:16.714951  151269 status.go:384] host is not running, skipping remaining checks
	I1026 14:33:16.714957  151269 status.go:176] ha-500839-m02 status: &{Name:ha-500839-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1026 14:33:16.714974  151269 status.go:174] checking status of ha-500839-m03 ...
	I1026 14:33:16.716070  151269 status.go:371] ha-500839-m03 host status = "Running" (err=<nil>)
	I1026 14:33:16.716087  151269 host.go:66] Checking if "ha-500839-m03" exists ...
	I1026 14:33:16.718197  151269 main.go:141] libmachine: domain ha-500839-m03 has defined MAC address 52:54:00:16:84:91 in network mk-ha-500839
	I1026 14:33:16.718590  151269 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:16:84:91", ip: ""} in network mk-ha-500839: {Iface:virbr1 ExpiryTime:2025-10-26 15:29:34 +0000 UTC Type:0 Mac:52:54:00:16:84:91 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-500839-m03 Clientid:01:52:54:00:16:84:91}
	I1026 14:33:16.718627  151269 main.go:141] libmachine: domain ha-500839-m03 has defined IP address 192.168.39.251 and MAC address 52:54:00:16:84:91 in network mk-ha-500839
	I1026 14:33:16.718762  151269 host.go:66] Checking if "ha-500839-m03" exists ...
	I1026 14:33:16.719000  151269 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 14:33:16.720941  151269 main.go:141] libmachine: domain ha-500839-m03 has defined MAC address 52:54:00:16:84:91 in network mk-ha-500839
	I1026 14:33:16.721312  151269 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:16:84:91", ip: ""} in network mk-ha-500839: {Iface:virbr1 ExpiryTime:2025-10-26 15:29:34 +0000 UTC Type:0 Mac:52:54:00:16:84:91 Iaid: IPaddr:192.168.39.251 Prefix:24 Hostname:ha-500839-m03 Clientid:01:52:54:00:16:84:91}
	I1026 14:33:16.721347  151269 main.go:141] libmachine: domain ha-500839-m03 has defined IP address 192.168.39.251 and MAC address 52:54:00:16:84:91 in network mk-ha-500839
	I1026 14:33:16.721488  151269 sshutil.go:53] new ssh client: &{IP:192.168.39.251 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/ha-500839-m03/id_rsa Username:docker}
	I1026 14:33:16.803612  151269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 14:33:16.820697  151269 kubeconfig.go:125] found "ha-500839" server: "https://192.168.39.254:8443"
	I1026 14:33:16.820736  151269 api_server.go:166] Checking apiserver status ...
	I1026 14:33:16.820807  151269 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 14:33:16.840027  151269 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1748/cgroup
	W1026 14:33:16.851640  151269 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1748/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1026 14:33:16.851757  151269 ssh_runner.go:195] Run: ls
	I1026 14:33:16.856962  151269 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1026 14:33:16.862307  151269 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1026 14:33:16.862334  151269 status.go:463] ha-500839-m03 apiserver status = Running (err=<nil>)
	I1026 14:33:16.862345  151269 status.go:176] ha-500839-m03 status: &{Name:ha-500839-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1026 14:33:16.862362  151269 status.go:174] checking status of ha-500839-m04 ...
	I1026 14:33:16.863924  151269 status.go:371] ha-500839-m04 host status = "Running" (err=<nil>)
	I1026 14:33:16.863945  151269 host.go:66] Checking if "ha-500839-m04" exists ...
	I1026 14:33:16.866125  151269 main.go:141] libmachine: domain ha-500839-m04 has defined MAC address 52:54:00:09:91:22 in network mk-ha-500839
	I1026 14:33:16.866560  151269 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:91:22", ip: ""} in network mk-ha-500839: {Iface:virbr1 ExpiryTime:2025-10-26 15:31:05 +0000 UTC Type:0 Mac:52:54:00:09:91:22 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-500839-m04 Clientid:01:52:54:00:09:91:22}
	I1026 14:33:16.866588  151269 main.go:141] libmachine: domain ha-500839-m04 has defined IP address 192.168.39.222 and MAC address 52:54:00:09:91:22 in network mk-ha-500839
	I1026 14:33:16.866714  151269 host.go:66] Checking if "ha-500839-m04" exists ...
	I1026 14:33:16.866969  151269 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 14:33:16.868835  151269 main.go:141] libmachine: domain ha-500839-m04 has defined MAC address 52:54:00:09:91:22 in network mk-ha-500839
	I1026 14:33:16.869209  151269 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:09:91:22", ip: ""} in network mk-ha-500839: {Iface:virbr1 ExpiryTime:2025-10-26 15:31:05 +0000 UTC Type:0 Mac:52:54:00:09:91:22 Iaid: IPaddr:192.168.39.222 Prefix:24 Hostname:ha-500839-m04 Clientid:01:52:54:00:09:91:22}
	I1026 14:33:16.869243  151269 main.go:141] libmachine: domain ha-500839-m04 has defined IP address 192.168.39.222 and MAC address 52:54:00:09:91:22 in network mk-ha-500839
	I1026 14:33:16.869365  151269 sshutil.go:53] new ssh client: &{IP:192.168.39.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/ha-500839-m04/id_rsa Username:docker}
	I1026 14:33:16.951618  151269 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 14:33:16.967961  151269 status.go:176] ha-500839-m04 status: &{Name:ha-500839-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (89.94s)
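
Note: with m02 stopped, status reports the degraded topology and exits non-zero (exit status 7 in the output above), so scripted health checks have to tolerate that exit code (a sketch):
	out/minikube-linux-amd64 -p ha-500839 node stop m02
	out/minikube-linux-amd64 -p ha-500839 status --alsologtostderr -v 5 || echo "status exited with $? (expected while a node is down)"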

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.51s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (34.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-500839 node start m02 --alsologtostderr -v 5: (34.002613966s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (34.80s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.73s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (359.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 stop --alsologtostderr -v 5
E1026 14:33:55.618344  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:34:24.738064  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/functional-946873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:36:40.875988  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/functional-946873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:37:08.582131  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/functional-946873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-500839 stop --alsologtostderr -v 5: (3m59.829263566s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 start --wait true --alsologtostderr -v 5
E1026 14:38:55.617772  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-500839 start --wait true --alsologtostderr -v 5: (1m59.606688463s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (359.58s)
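
Note: restarting the whole profile is expected to preserve the node list; the subtest compares node list output from before the stop with the output after the restart (a sketch; flags taken from the run above):
	out/minikube-linux-amd64 -p ha-500839 node list
	out/minikube-linux-amd64 -p ha-500839 stop
	out/minikube-linux-amd64 -p ha-500839 start --wait true
	out/minikube-linux-amd64 -p ha-500839 node list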

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (18.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-500839 node delete m03 --alsologtostderr -v 5: (17.534828066s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.14s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.51s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (254.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 stop --alsologtostderr -v 5
E1026 14:40:18.689981  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:41:40.876109  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/functional-946873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:43:55.621508  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-500839 stop --alsologtostderr -v 5: (4m14.479731465s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-500839 status --alsologtostderr -v 5: exit status 7 (65.577243ms)

                                                
                                                
-- stdout --
	ha-500839
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-500839-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-500839-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 14:44:25.769636  154478 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:44:25.769782  154478 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:44:25.769793  154478 out.go:374] Setting ErrFile to fd 2...
	I1026 14:44:25.769799  154478 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:44:25.769996  154478 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-137233/.minikube/bin
	I1026 14:44:25.770214  154478 out.go:368] Setting JSON to false
	I1026 14:44:25.770261  154478 mustload.go:65] Loading cluster: ha-500839
	I1026 14:44:25.770470  154478 notify.go:220] Checking for updates...
	I1026 14:44:25.771591  154478 config.go:182] Loaded profile config "ha-500839": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:44:25.771628  154478 status.go:174] checking status of ha-500839 ...
	I1026 14:44:25.773763  154478 status.go:371] ha-500839 host status = "Stopped" (err=<nil>)
	I1026 14:44:25.773782  154478 status.go:384] host is not running, skipping remaining checks
	I1026 14:44:25.773788  154478 status.go:176] ha-500839 status: &{Name:ha-500839 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1026 14:44:25.773829  154478 status.go:174] checking status of ha-500839-m02 ...
	I1026 14:44:25.774860  154478 status.go:371] ha-500839-m02 host status = "Stopped" (err=<nil>)
	I1026 14:44:25.774875  154478 status.go:384] host is not running, skipping remaining checks
	I1026 14:44:25.774880  154478 status.go:176] ha-500839-m02 status: &{Name:ha-500839-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1026 14:44:25.774894  154478 status.go:174] checking status of ha-500839-m04 ...
	I1026 14:44:25.775954  154478 status.go:371] ha-500839-m04 host status = "Stopped" (err=<nil>)
	I1026 14:44:25.775971  154478 status.go:384] host is not running, skipping remaining checks
	I1026 14:44:25.775978  154478 status.go:176] ha-500839-m04 status: &{Name:ha-500839-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (254.55s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (95.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-500839 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (1m34.777585589s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (95.39s)
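
The restart path above can be replayed by hand against an existing ha-500839 profile; these are the same commands this run drives, with the go-template query confirming that every node reports Ready:

	out/minikube-linux-amd64 -p ha-500839 start --wait true --alsologtostderr -v 5 --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 -p ha-500839 status --alsologtostderr -v 5
	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'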

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.50s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (74.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 node add --control-plane --alsologtostderr -v 5
E1026 14:46:40.878050  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/functional-946873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-500839 node add --control-plane --alsologtostderr -v 5: (1m13.984498834s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-500839 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (74.65s)
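
The secondary control-plane addition reduces to one node add call followed by a status check; a minimal manual equivalent using the same flags as the test:

	out/minikube-linux-amd64 -p ha-500839 node add --control-plane --alsologtostderr -v 5
	out/minikube-linux-amd64 -p ha-500839 status --alsologtostderr -v 5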

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.70s)

                                                
                                    
TestJSONOutput/start/Command (74.91s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-292686 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
E1026 14:48:03.943863  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/functional-946873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-292686 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m14.912836507s)
--- PASS: TestJSONOutput/start/Command (74.91s)
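
With --output=json, start emits one CloudEvents-style JSON object per line (the TestErrorJSONOutput stdout further down shows the field layout). A rough sketch for pulling out just the step messages, assuming jq is available on the host:

	out/minikube-linux-amd64 start -p json-output-292686 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2 --container-runtime=crio \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'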

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.69s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-292686 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.69s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.63s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-292686 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6.82s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-292686 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-292686 --output=json --user=testUser: (6.816627783s)
--- PASS: TestJSONOutput/stop/Command (6.82s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-411551 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-411551 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (71.74618ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"0d0e222a-d1a9-4a4a-90d6-eb2f241bc72d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-411551] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e0b0e01a-c439-4453-9c56-233cae848452","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21664"}}
	{"specversion":"1.0","id":"db6f7f82-26a5-4ae3-9c6e-aa03c1de4b7c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b6337da1-f533-4726-8b94-0f134da3528e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21664-137233/kubeconfig"}}
	{"specversion":"1.0","id":"1f6c66f1-8485-4fab-ad35-a843134a8ab3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-137233/.minikube"}}
	{"specversion":"1.0","id":"4390e083-fac1-4ed5-a16f-ddc7eecc567e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"549666b0-6c64-405a-a65c-ba689f0ae707","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"29aea134-1214-4224-922e-38885c887c42","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-411551" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-411551
--- PASS: TestErrorJSONOutput (0.22s)
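
The error event above carries a machine-readable exit code and error name (DRV_UNSUPPORTED_OS, exitcode 56), so a wrapper script could branch on those fields instead of scraping stderr; a small sketch, again assuming jq:

	out/minikube-linux-amd64 start -p json-output-error-411551 --memory=3072 --output=json --wait=true --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name): exit \(.data.exitcode)"'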

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (75.32s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-816426 --driver=kvm2  --container-runtime=crio
E1026 14:48:55.621708  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-816426 --driver=kvm2  --container-runtime=crio: (34.967544974s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-818774 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-818774 --driver=kvm2  --container-runtime=crio: (37.706780933s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-816426
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-818774
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-818774" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-818774
helpers_test.go:175: Cleaning up "first-816426" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-816426
--- PASS: TestMinikubeProfile (75.32s)
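
The profile juggling above is the standard flow: start each profile, switch the active profile, and inspect the result; condensed, using the same commands as the test:

	out/minikube-linux-amd64 start -p first-816426 --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 start -p second-818774 --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 profile first-816426
	out/minikube-linux-amd64 profile list -ojson
	out/minikube-linux-amd64 delete -p second-818774
	out/minikube-linux-amd64 delete -p first-816426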

                                                
                                    
TestMountStart/serial/StartWithMountFirst (20.92s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-549962 --memory=3072 --mount-string /tmp/TestMountStartserial2940418941/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-549962 --memory=3072 --mount-string /tmp/TestMountStartserial2940418941/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (19.915665651s)
--- PASS: TestMountStart/serial/StartWithMountFirst (20.92s)
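
The mount flags here map a host directory into the guest at start time, and the VerifyMount subtests that follow simply check it with findmnt over ssh. A condensed reproduction of the pattern (the host path below is this run's temp dir; any local directory works):

	out/minikube-linux-amd64 start -p mount-start-1-549962 --memory=3072 \
	  --mount-string /tmp/TestMountStartserial2940418941/001:/minikube-host \
	  --mount-uid 0 --mount-gid 0 --mount-msize 6543 --mount-port 46464 \
	  --no-kubernetes --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 -p mount-start-1-549962 ssh -- findmnt --json /minikube-host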

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-549962 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-549962 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.31s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (20.79s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-569337 --memory=3072 --mount-string /tmp/TestMountStartserial2940418941/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-569337 --memory=3072 --mount-string /tmp/TestMountStartserial2940418941/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (19.793362811s)
--- PASS: TestMountStart/serial/StartWithMountSecond (20.79s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-569337 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-569337 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.30s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.67s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-549962 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.67s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-569337 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-569337 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.31s)

                                                
                                    
TestMountStart/serial/Stop (1.18s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-569337
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-569337: (1.177880727s)
--- PASS: TestMountStart/serial/Stop (1.18s)

                                                
                                    
TestMountStart/serial/RestartStopped (18.33s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-569337
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-569337: (17.326117886s)
--- PASS: TestMountStart/serial/RestartStopped (18.33s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-569337 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-569337 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.30s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (96.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-578731 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1026 14:51:40.876146  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/functional-946873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-578731 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m35.848893836s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-578731 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (96.17s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-578731 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-578731 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-578731 -- rollout status deployment/busybox: (4.624983165s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-578731 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-578731 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-578731 -- exec busybox-7b57f96db7-fbnsz -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-578731 -- exec busybox-7b57f96db7-th6dx -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-578731 -- exec busybox-7b57f96db7-fbnsz -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-578731 -- exec busybox-7b57f96db7-th6dx -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-578731 -- exec busybox-7b57f96db7-fbnsz -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-578731 -- exec busybox-7b57f96db7-th6dx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.17s)
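
The DNS checks run nslookup from each busybox replica once the rollout completes; the same check by hand (POD is a placeholder for any pod name returned by the get pods query):

	out/minikube-linux-amd64 kubectl -p multinode-578731 -- rollout status deployment/busybox
	out/minikube-linux-amd64 kubectl -p multinode-578731 -- get pods -o jsonpath='{.items[*].metadata.name}'
	out/minikube-linux-amd64 kubectl -p multinode-578731 -- exec POD -- nslookup kubernetes.default.svc.cluster.local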

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-578731 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-578731 -- exec busybox-7b57f96db7-fbnsz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-578731 -- exec busybox-7b57f96db7-fbnsz -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-578731 -- exec busybox-7b57f96db7-th6dx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-578731 -- exec busybox-7b57f96db7-th6dx -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.85s)
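
The in-pod pipeline extracts the address that host.minikube.internal resolves to (awk 'NR==5' keeps nslookup's fifth output line, cut -d' ' -f3 takes its third space-separated field) and then pings it; spelled out, again with POD as a placeholder:

	out/minikube-linux-amd64 kubectl -p multinode-578731 -- exec POD -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	out/minikube-linux-amd64 kubectl -p multinode-578731 -- exec POD -- sh -c "ping -c 1 192.168.39.1"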

                                                
                                    
TestMultiNode/serial/AddNode (42.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-578731 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-578731 -v=5 --alsologtostderr: (42.047761616s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-578731 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (42.48s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-578731 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.44s)

                                                
                                    
TestMultiNode/serial/CopyFile (5.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-578731 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-578731 cp testdata/cp-test.txt multinode-578731:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-578731 ssh -n multinode-578731 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-578731 cp multinode-578731:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3357415865/001/cp-test_multinode-578731.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-578731 ssh -n multinode-578731 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-578731 cp multinode-578731:/home/docker/cp-test.txt multinode-578731-m02:/home/docker/cp-test_multinode-578731_multinode-578731-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-578731 ssh -n multinode-578731 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-578731 ssh -n multinode-578731-m02 "sudo cat /home/docker/cp-test_multinode-578731_multinode-578731-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-578731 cp multinode-578731:/home/docker/cp-test.txt multinode-578731-m03:/home/docker/cp-test_multinode-578731_multinode-578731-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-578731 ssh -n multinode-578731 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-578731 ssh -n multinode-578731-m03 "sudo cat /home/docker/cp-test_multinode-578731_multinode-578731-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-578731 cp testdata/cp-test.txt multinode-578731-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-578731 ssh -n multinode-578731-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-578731 cp multinode-578731-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3357415865/001/cp-test_multinode-578731-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-578731 ssh -n multinode-578731-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-578731 cp multinode-578731-m02:/home/docker/cp-test.txt multinode-578731:/home/docker/cp-test_multinode-578731-m02_multinode-578731.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-578731 ssh -n multinode-578731-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-578731 ssh -n multinode-578731 "sudo cat /home/docker/cp-test_multinode-578731-m02_multinode-578731.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-578731 cp multinode-578731-m02:/home/docker/cp-test.txt multinode-578731-m03:/home/docker/cp-test_multinode-578731-m02_multinode-578731-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-578731 ssh -n multinode-578731-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-578731 ssh -n multinode-578731-m03 "sudo cat /home/docker/cp-test_multinode-578731-m02_multinode-578731-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-578731 cp testdata/cp-test.txt multinode-578731-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-578731 ssh -n multinode-578731-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-578731 cp multinode-578731-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3357415865/001/cp-test_multinode-578731-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-578731 ssh -n multinode-578731-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-578731 cp multinode-578731-m03:/home/docker/cp-test.txt multinode-578731:/home/docker/cp-test_multinode-578731-m03_multinode-578731.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-578731 ssh -n multinode-578731-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-578731 ssh -n multinode-578731 "sudo cat /home/docker/cp-test_multinode-578731-m03_multinode-578731.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-578731 cp multinode-578731-m03:/home/docker/cp-test.txt multinode-578731-m02:/home/docker/cp-test_multinode-578731-m03_multinode-578731-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-578731 ssh -n multinode-578731-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-578731 ssh -n multinode-578731-m02 "sudo cat /home/docker/cp-test_multinode-578731-m03_multinode-578731-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (5.92s)
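
The copy matrix above covers every direction: host to node, node to host, and node to node, each followed by a cat over ssh to confirm the contents; one round trip of the pattern, with the destination filename shortened for readability:

	out/minikube-linux-amd64 -p multinode-578731 cp testdata/cp-test.txt multinode-578731:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p multinode-578731 cp multinode-578731:/home/docker/cp-test.txt multinode-578731-m02:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p multinode-578731 ssh -n multinode-578731-m02 "sudo cat /home/docker/cp-test.txt"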

                                                
                                    
TestMultiNode/serial/StopNode (2.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-578731 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-578731 node stop m03: (1.485813629s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-578731 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-578731 status: exit status 7 (320.711641ms)

                                                
                                                
-- stdout --
	multinode-578731
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-578731-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-578731-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-578731 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-578731 status --alsologtostderr: exit status 7 (326.7143ms)

                                                
                                                
-- stdout --
	multinode-578731
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-578731-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-578731-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 14:53:37.406914  159949 out.go:360] Setting OutFile to fd 1 ...
	I1026 14:53:37.407154  159949 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:53:37.407162  159949 out.go:374] Setting ErrFile to fd 2...
	I1026 14:53:37.407166  159949 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 14:53:37.407347  159949 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-137233/.minikube/bin
	I1026 14:53:37.407556  159949 out.go:368] Setting JSON to false
	I1026 14:53:37.407595  159949 mustload.go:65] Loading cluster: multinode-578731
	I1026 14:53:37.407714  159949 notify.go:220] Checking for updates...
	I1026 14:53:37.407982  159949 config.go:182] Loaded profile config "multinode-578731": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 14:53:37.407996  159949 status.go:174] checking status of multinode-578731 ...
	I1026 14:53:37.410052  159949 status.go:371] multinode-578731 host status = "Running" (err=<nil>)
	I1026 14:53:37.410070  159949 host.go:66] Checking if "multinode-578731" exists ...
	I1026 14:53:37.412419  159949 main.go:141] libmachine: domain multinode-578731 has defined MAC address 52:54:00:71:7c:3f in network mk-multinode-578731
	I1026 14:53:37.412835  159949 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:71:7c:3f", ip: ""} in network mk-multinode-578731: {Iface:virbr1 ExpiryTime:2025-10-26 15:51:18 +0000 UTC Type:0 Mac:52:54:00:71:7c:3f Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-578731 Clientid:01:52:54:00:71:7c:3f}
	I1026 14:53:37.412862  159949 main.go:141] libmachine: domain multinode-578731 has defined IP address 192.168.39.104 and MAC address 52:54:00:71:7c:3f in network mk-multinode-578731
	I1026 14:53:37.412982  159949 host.go:66] Checking if "multinode-578731" exists ...
	I1026 14:53:37.413210  159949 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 14:53:37.415358  159949 main.go:141] libmachine: domain multinode-578731 has defined MAC address 52:54:00:71:7c:3f in network mk-multinode-578731
	I1026 14:53:37.415782  159949 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:71:7c:3f", ip: ""} in network mk-multinode-578731: {Iface:virbr1 ExpiryTime:2025-10-26 15:51:18 +0000 UTC Type:0 Mac:52:54:00:71:7c:3f Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:multinode-578731 Clientid:01:52:54:00:71:7c:3f}
	I1026 14:53:37.415807  159949 main.go:141] libmachine: domain multinode-578731 has defined IP address 192.168.39.104 and MAC address 52:54:00:71:7c:3f in network mk-multinode-578731
	I1026 14:53:37.415991  159949 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/multinode-578731/id_rsa Username:docker}
	I1026 14:53:37.500268  159949 ssh_runner.go:195] Run: systemctl --version
	I1026 14:53:37.506389  159949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 14:53:37.522975  159949 kubeconfig.go:125] found "multinode-578731" server: "https://192.168.39.104:8443"
	I1026 14:53:37.523013  159949 api_server.go:166] Checking apiserver status ...
	I1026 14:53:37.523053  159949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 14:53:37.542170  159949 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1359/cgroup
	W1026 14:53:37.554410  159949 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1359/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1026 14:53:37.554492  159949 ssh_runner.go:195] Run: ls
	I1026 14:53:37.558946  159949 api_server.go:253] Checking apiserver healthz at https://192.168.39.104:8443/healthz ...
	I1026 14:53:37.564163  159949 api_server.go:279] https://192.168.39.104:8443/healthz returned 200:
	ok
	I1026 14:53:37.564186  159949 status.go:463] multinode-578731 apiserver status = Running (err=<nil>)
	I1026 14:53:37.564197  159949 status.go:176] multinode-578731 status: &{Name:multinode-578731 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1026 14:53:37.564214  159949 status.go:174] checking status of multinode-578731-m02 ...
	I1026 14:53:37.565924  159949 status.go:371] multinode-578731-m02 host status = "Running" (err=<nil>)
	I1026 14:53:37.565949  159949 host.go:66] Checking if "multinode-578731-m02" exists ...
	I1026 14:53:37.568749  159949 main.go:141] libmachine: domain multinode-578731-m02 has defined MAC address 52:54:00:52:20:fe in network mk-multinode-578731
	I1026 14:53:37.569124  159949 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:52:20:fe", ip: ""} in network mk-multinode-578731: {Iface:virbr1 ExpiryTime:2025-10-26 15:52:11 +0000 UTC Type:0 Mac:52:54:00:52:20:fe Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-578731-m02 Clientid:01:52:54:00:52:20:fe}
	I1026 14:53:37.569149  159949 main.go:141] libmachine: domain multinode-578731-m02 has defined IP address 192.168.39.20 and MAC address 52:54:00:52:20:fe in network mk-multinode-578731
	I1026 14:53:37.569291  159949 host.go:66] Checking if "multinode-578731-m02" exists ...
	I1026 14:53:37.569543  159949 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 14:53:37.571451  159949 main.go:141] libmachine: domain multinode-578731-m02 has defined MAC address 52:54:00:52:20:fe in network mk-multinode-578731
	I1026 14:53:37.571795  159949 main.go:141] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:52:20:fe", ip: ""} in network mk-multinode-578731: {Iface:virbr1 ExpiryTime:2025-10-26 15:52:11 +0000 UTC Type:0 Mac:52:54:00:52:20:fe Iaid: IPaddr:192.168.39.20 Prefix:24 Hostname:multinode-578731-m02 Clientid:01:52:54:00:52:20:fe}
	I1026 14:53:37.571814  159949 main.go:141] libmachine: domain multinode-578731-m02 has defined IP address 192.168.39.20 and MAC address 52:54:00:52:20:fe in network mk-multinode-578731
	I1026 14:53:37.571919  159949 sshutil.go:53] new ssh client: &{IP:192.168.39.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21664-137233/.minikube/machines/multinode-578731-m02/id_rsa Username:docker}
	I1026 14:53:37.652172  159949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 14:53:37.667350  159949 status.go:176] multinode-578731-m02 status: &{Name:multinode-578731-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1026 14:53:37.667397  159949 status.go:174] checking status of multinode-578731-m03 ...
	I1026 14:53:37.669295  159949 status.go:371] multinode-578731-m03 host status = "Stopped" (err=<nil>)
	I1026 14:53:37.669321  159949 status.go:384] host is not running, skipping remaining checks
	I1026 14:53:37.669329  159949 status.go:176] multinode-578731-m03 status: &{Name:multinode-578731-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.13s)
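
As the non-zero exits above show, status returns a failing exit code (7 in this run) when a node in the profile is stopped, which is what the test keys on after stopping m03; a minimal check:

	out/minikube-linux-amd64 -p multinode-578731 node stop m03
	out/minikube-linux-amd64 -p multinode-578731 status || echo "status exit code: $?"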

                                                
                                    
TestMultiNode/serial/StartAfterStop (39.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-578731 node start m03 -v=5 --alsologtostderr
E1026 14:53:55.617827  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-578731 node start m03 -v=5 --alsologtostderr: (39.078585793s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-578731 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.57s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (288.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-578731
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-578731
E1026 14:56:40.879938  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/functional-946873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 14:56:58.694402  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-578731: (2m47.75568069s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-578731 --wait=true -v=5 --alsologtostderr
E1026 14:58:55.619077  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-578731 --wait=true -v=5 --alsologtostderr: (2m0.411167194s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-578731
--- PASS: TestMultiNode/serial/RestartKeepsNodes (288.30s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-578731 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-578731 node delete m03: (2.094049634s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-578731 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.54s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (163.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-578731 stop
E1026 15:01:40.875715  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/functional-946873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-578731 stop: (2m43.070510561s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-578731 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-578731 status: exit status 7 (63.89452ms)

                                                
                                                
-- stdout --
	multinode-578731
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-578731-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-578731 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-578731 status --alsologtostderr: exit status 7 (62.262237ms)

                                                
                                                
-- stdout --
	multinode-578731
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-578731-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 15:01:51.274600  162297 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:01:51.274715  162297 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:01:51.274724  162297 out.go:374] Setting ErrFile to fd 2...
	I1026 15:01:51.274727  162297 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:01:51.274904  162297 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-137233/.minikube/bin
	I1026 15:01:51.275526  162297 out.go:368] Setting JSON to false
	I1026 15:01:51.275579  162297 mustload.go:65] Loading cluster: multinode-578731
	I1026 15:01:51.275961  162297 notify.go:220] Checking for updates...
	I1026 15:01:51.276372  162297 config.go:182] Loaded profile config "multinode-578731": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:01:51.276394  162297 status.go:174] checking status of multinode-578731 ...
	I1026 15:01:51.278300  162297 status.go:371] multinode-578731 host status = "Stopped" (err=<nil>)
	I1026 15:01:51.278316  162297 status.go:384] host is not running, skipping remaining checks
	I1026 15:01:51.278320  162297 status.go:176] multinode-578731 status: &{Name:multinode-578731 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1026 15:01:51.278336  162297 status.go:174] checking status of multinode-578731-m02 ...
	I1026 15:01:51.279542  162297 status.go:371] multinode-578731-m02 host status = "Stopped" (err=<nil>)
	I1026 15:01:51.279555  162297 status.go:384] host is not running, skipping remaining checks
	I1026 15:01:51.279559  162297 status.go:176] multinode-578731-m02 status: &{Name:multinode-578731-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (163.20s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (83.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-578731 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-578731 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m23.213814035s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-578731 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (83.70s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (39.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-578731
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-578731-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-578731-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (73.383307ms)

                                                
                                                
-- stdout --
	* [multinode-578731-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21664-137233/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-137233/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-578731-m02' is duplicated with machine name 'multinode-578731-m02' in profile 'multinode-578731'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-578731-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-578731-m03 --driver=kvm2  --container-runtime=crio: (37.944322993s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-578731
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-578731: exit status 80 (200.986556ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-578731 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-578731-m03 already exists in multinode-578731-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-578731-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (39.16s)
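
Both failure modes above are naming guards: a new profile cannot reuse a machine name already owned by an existing multi-node profile (exit 14, MK_USAGE), and node add refuses when the next node name is already claimed by another profile (exit 80, GUEST_NODE_ADD). The first guard can be tripped directly while multinode-578731 exists:

	out/minikube-linux-amd64 start -p multinode-578731-m02 --driver=kvm2 --container-runtime=crio
	echo "exit code: $?"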

                                                
                                    
TestScheduledStopUnix (107.79s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-390806 --memory=3072 --driver=kvm2  --container-runtime=crio
E1026 15:06:40.879813  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/functional-946873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-390806 --memory=3072 --driver=kvm2  --container-runtime=crio: (36.179803977s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-390806 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-390806 -n scheduled-stop-390806
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-390806 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1026 15:06:43.311913  141233 retry.go:31] will retry after 76.726µs: open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/scheduled-stop-390806/pid: no such file or directory
I1026 15:06:43.313087  141233 retry.go:31] will retry after 151.038µs: open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/scheduled-stop-390806/pid: no such file or directory
I1026 15:06:43.314190  141233 retry.go:31] will retry after 264.136µs: open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/scheduled-stop-390806/pid: no such file or directory
I1026 15:06:43.315332  141233 retry.go:31] will retry after 394.217µs: open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/scheduled-stop-390806/pid: no such file or directory
I1026 15:06:43.316427  141233 retry.go:31] will retry after 391.411µs: open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/scheduled-stop-390806/pid: no such file or directory
I1026 15:06:43.317567  141233 retry.go:31] will retry after 687.163µs: open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/scheduled-stop-390806/pid: no such file or directory
I1026 15:06:43.318705  141233 retry.go:31] will retry after 975.862µs: open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/scheduled-stop-390806/pid: no such file or directory
I1026 15:06:43.319798  141233 retry.go:31] will retry after 1.511042ms: open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/scheduled-stop-390806/pid: no such file or directory
I1026 15:06:43.321989  141233 retry.go:31] will retry after 2.723942ms: open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/scheduled-stop-390806/pid: no such file or directory
I1026 15:06:43.325190  141233 retry.go:31] will retry after 3.940974ms: open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/scheduled-stop-390806/pid: no such file or directory
I1026 15:06:43.329382  141233 retry.go:31] will retry after 7.831694ms: open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/scheduled-stop-390806/pid: no such file or directory
I1026 15:06:43.337571  141233 retry.go:31] will retry after 10.035958ms: open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/scheduled-stop-390806/pid: no such file or directory
I1026 15:06:43.347734  141233 retry.go:31] will retry after 8.495701ms: open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/scheduled-stop-390806/pid: no such file or directory
I1026 15:06:43.356985  141233 retry.go:31] will retry after 9.822451ms: open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/scheduled-stop-390806/pid: no such file or directory
I1026 15:06:43.367204  141233 retry.go:31] will retry after 17.715806ms: open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/scheduled-stop-390806/pid: no such file or directory
I1026 15:06:43.385398  141233 retry.go:31] will retry after 33.49633ms: open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/scheduled-stop-390806/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-390806 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-390806 -n scheduled-stop-390806
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-390806
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-390806 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-390806
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-390806: exit status 7 (65.644707ms)

                                                
                                                
-- stdout --
	scheduled-stop-390806
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-390806 -n scheduled-stop-390806
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-390806 -n scheduled-stop-390806: exit status 7 (59.885028ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-390806" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-390806
--- PASS: TestScheduledStopUnix (107.79s)
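Condensed, the scheduled-stop flow this test drives uses only the flags shown above (a sketch; minikube stands in for the binary under test):

    # schedule a stop a few minutes out, then inspect the pending schedule
    $ minikube stop -p scheduled-stop-390806 --schedule 5m
    $ minikube status -p scheduled-stop-390806 --format={{.TimeToStop}}
    # replace the schedule with a shorter one, or cancel it outright
    $ minikube stop -p scheduled-stop-390806 --schedule 15s
    $ minikube stop -p scheduled-stop-390806 --cancel-scheduled
    # after a schedule fires, status reports Stopped and exits 7
    $ minikube status -p scheduled-stop-390806 --format={{.Host}}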

                                                
                                    
TestRunningBinaryUpgrade (140.17s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.549563919 start -p running-upgrade-208243 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.549563919 start -p running-upgrade-208243 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m29.31207092s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-208243 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-208243 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (46.799073026s)
helpers_test.go:175: Cleaning up "running-upgrade-208243" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-208243
--- PASS: TestRunningBinaryUpgrade (140.17s)
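The running-binary upgrade amounts to creating the cluster with the old release and re-running start on the same, still-running profile with the new binary (a condensed sketch of the commands above, logging flags omitted):

    $ /tmp/minikube-v1.32.0.549563919 start -p running-upgrade-208243 --memory=3072 --vm-driver=kvm2 --container-runtime=crio
    $ out/minikube-linux-amd64 start -p running-upgrade-208243 --memory=3072 --driver=kvm2 --container-runtime=crio
    $ out/minikube-linux-amd64 delete -p running-upgrade-208243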

                                                
                                    
TestKubernetesUpgrade (121.14s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-535234 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-535234 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (54.488933682s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-535234
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-535234: (1.983282917s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-535234 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-535234 status --format={{.Host}}: exit status 7 (65.987808ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-535234 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-535234 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (32.973874423s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-535234 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-535234 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-535234 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (81.497732ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-535234] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21664-137233/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-137233/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-535234
	    minikube start -p kubernetes-upgrade-535234 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5352342 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-535234 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-535234 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-535234 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (30.557846754s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-535234" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-535234
--- PASS: TestKubernetesUpgrade (121.14s)
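Condensed, the sequence verified here is an upgrade across a stop, followed by confirming that an in-place downgrade is refused (a sketch of the commands above; memory and logging flags omitted, minikube stands in for the binary under test):

    $ minikube start -p kubernetes-upgrade-535234 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio
    $ minikube stop -p kubernetes-upgrade-535234
    $ minikube start -p kubernetes-upgrade-535234 --kubernetes-version=v1.34.1 --driver=kvm2 --container-runtime=crio
    # downgrading in place exits 106 (K8S_DOWNGRADE_UNSUPPORTED); the suggested path is delete + recreate
    $ minikube delete -p kubernetes-upgrade-535234
    $ minikube start -p kubernetes-upgrade-535234 --kubernetes-version=v1.28.0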

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-119256 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-119256 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (92.32969ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-119256] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21664-137233/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-137233/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
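The exit status 14 (MK_USAGE) above is the expected result: --no-kubernetes and --kubernetes-version are mutually exclusive. A valid invocation, as the later subtests use, drops the version flag and clears any global default first (sketch; minikube stands in for the binary under test):

    # clear a globally configured kubernetes-version, if one is set
    $ minikube config unset kubernetes-version
    # start the VM without provisioning Kubernetes
    $ minikube start -p NoKubernetes-119256 --no-kubernetes --memory=3072 --driver=kvm2 --container-runtime=crio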

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (76.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-119256 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-119256 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m16.041130702s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-119256 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (76.34s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (44.97s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-119256 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-119256 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (43.908125175s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-119256 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-119256 status -o json: exit status 2 (196.053152ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-119256","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-119256
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (44.97s)

                                                
                                    
TestNoKubernetes/serial/Start (26.42s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-119256 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-119256 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (26.417821651s)
--- PASS: TestNoKubernetes/serial/Start (26.42s)

                                                
                                    
TestNetworkPlugins/group/false (3.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-961864 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-961864 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (114.055873ms)

                                                
                                                
-- stdout --
	* [false-961864] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21664-137233/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-137233/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 15:10:18.237183  167835 out.go:360] Setting OutFile to fd 1 ...
	I1026 15:10:18.237441  167835 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:10:18.237449  167835 out.go:374] Setting ErrFile to fd 2...
	I1026 15:10:18.237467  167835 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 15:10:18.237670  167835 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-137233/.minikube/bin
	I1026 15:10:18.238172  167835 out.go:368] Setting JSON to false
	I1026 15:10:18.239004  167835 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6752,"bootTime":1761484666,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 15:10:18.239088  167835 start.go:141] virtualization: kvm guest
	I1026 15:10:18.240807  167835 out.go:179] * [false-961864] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 15:10:18.242171  167835 out.go:179]   - MINIKUBE_LOCATION=21664
	I1026 15:10:18.242228  167835 notify.go:220] Checking for updates...
	I1026 15:10:18.244185  167835 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 15:10:18.245300  167835 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-137233/kubeconfig
	I1026 15:10:18.246373  167835 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-137233/.minikube
	I1026 15:10:18.247365  167835 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 15:10:18.248260  167835 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 15:10:18.249631  167835 config.go:182] Loaded profile config "NoKubernetes-119256": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1026 15:10:18.249716  167835 config.go:182] Loaded profile config "cert-expiration-553579": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:10:18.249801  167835 config.go:182] Loaded profile config "force-systemd-env-059721": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 15:10:18.249908  167835 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 15:10:18.284906  167835 out.go:179] * Using the kvm2 driver based on user configuration
	I1026 15:10:18.285874  167835 start.go:305] selected driver: kvm2
	I1026 15:10:18.285886  167835 start.go:925] validating driver "kvm2" against <nil>
	I1026 15:10:18.285897  167835 start.go:936] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 15:10:18.287632  167835 out.go:203] 
	W1026 15:10:18.288759  167835 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1026 15:10:18.289779  167835 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-961864 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-961864

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-961864

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-961864

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-961864

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-961864

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-961864

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-961864

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-961864

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-961864

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-961864

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-961864"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-961864"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-961864"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-961864

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-961864"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-961864"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-961864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-961864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-961864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-961864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-961864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-961864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-961864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-961864" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-961864"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-961864"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-961864"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-961864"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-961864"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-961864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-961864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-961864" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-961864"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-961864"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-961864"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-961864"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-961864"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21664-137233/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 26 Oct 2025 15:09:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.108:8443
  name: cert-expiration-553579
contexts:
- context:
    cluster: cert-expiration-553579
    extensions:
    - extension:
        last-update: Sun, 26 Oct 2025 15:09:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-553579
  name: cert-expiration-553579
current-context: ""
kind: Config
users:
- name: cert-expiration-553579
  user:
    client-certificate: /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/cert-expiration-553579/client.crt
    client-key: /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/cert-expiration-553579/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-961864

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-961864"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-961864"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-961864"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-961864"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-961864"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-961864"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-961864"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-961864"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-961864"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-961864"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-961864"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-961864"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-961864"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-961864"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-961864"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-961864"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-961864"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-961864"

                                                
                                                
----------------------- debugLogs end: false-961864 [took: 3.541949269s] --------------------------------
helpers_test.go:175: Cleaning up "false-961864" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-961864
--- PASS: TestNetworkPlugins/group/false (3.85s)
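The exit status 14 here is also the expected outcome: with the crio runtime, --cni=false is rejected because cri-o requires a CNI. The other plugin groups in this run supply one explicitly; condensed from their start commands later in this report (memory and wait flags omitted, minikube stands in for the binary under test):

    $ minikube start -p kindnet-961864 --cni=kindnet --driver=kvm2 --container-runtime=crio
    $ minikube start -p calico-961864 --cni=calico --driver=kvm2 --container-runtime=crio
    $ minikube start -p custom-flannel-961864 --cni=testdata/kube-flannel.yaml --driver=kvm2 --container-runtime=crio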

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-119256 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-119256 "sudo systemctl is-active --quiet service kubelet": exit status 1 (195.50938ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)
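The non-zero exit is the point of this check: systemctl is-active returns non-zero for an inactive kubelet unit, which is how the test confirms Kubernetes is not running on the --no-kubernetes profile (as issued above; minikube stands in for the binary under test):

    # non-zero exit (surfaced as ssh status 4 above) means kubelet is not active, the desired state here
    $ minikube ssh -p NoKubernetes-119256 "sudo systemctl is-active --quiet service kubelet"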

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.86s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.86s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.42s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-119256
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-119256: (1.420520588s)
--- PASS: TestNoKubernetes/serial/Stop (1.42s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (34.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-119256 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-119256 --driver=kvm2  --container-runtime=crio: (34.19202699s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (34.19s)

                                                
                                    
TestPause/serial/Start (106.91s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-750553 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-750553 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m46.904918533s)
--- PASS: TestPause/serial/Start (106.91s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-119256 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-119256 "sudo systemctl is-active --quiet service kubelet": exit status 1 (181.738691ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.18s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (3.96s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.96s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (110.58s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.118687828 start -p stopped-upgrade-004195 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
E1026 15:11:40.876093  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/functional-946873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.118687828 start -p stopped-upgrade-004195 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m6.041789231s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.118687828 -p stopped-upgrade-004195 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.118687828 -p stopped-upgrade-004195 stop: (1.757016795s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-004195 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-004195 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (42.782230688s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (110.58s)
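Unlike TestRunningBinaryUpgrade, this variant stops the cluster with the old binary before the new binary takes it over (a condensed sketch of the commands above, logging flags omitted):

    $ /tmp/minikube-v1.32.0.118687828 start -p stopped-upgrade-004195 --memory=3072 --vm-driver=kvm2 --container-runtime=crio
    $ /tmp/minikube-v1.32.0.118687828 -p stopped-upgrade-004195 stop
    $ out/minikube-linux-amd64 start -p stopped-upgrade-004195 --memory=3072 --driver=kvm2 --container-runtime=crio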

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.07s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-004195
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-004195: (1.074182318s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.07s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (53.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-961864 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-961864 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (53.757943789s)
--- PASS: TestNetworkPlugins/group/auto/Start (53.76s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (74.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-961864 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-961864 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m14.539798931s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (74.54s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (66.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-961864 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E1026 15:13:38.696236  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-961864 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m6.920336503s)
--- PASS: TestNetworkPlugins/group/calico/Start (66.92s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-961864 "pgrep -a kubelet"
I1026 15:13:54.387133  141233 config.go:182] Loaded profile config "auto-961864": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.18s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-961864 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jxg4d" [da4e6eb5-20a4-4d22-ae03-bbc83a58cde4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1026 15:13:55.618596  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-jxg4d" [da4e6eb5-20a4-4d22-ae03-bbc83a58cde4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.030961913s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.24s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-961864 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-961864 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.61s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-961864 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)
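The DNS, Localhost and HairPin subtests above all probe from inside the netcat deployment via kubectl exec; condensed, the three checks are:

    # DNS: resolve the in-cluster service name
    $ kubectl --context auto-961864 exec deployment/netcat -- nslookup kubernetes.default
    # Localhost: connect to a local port from inside the pod
    $ kubectl --context auto-961864 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # HairPin: reach the pod's own service by name
    $ kubectl --context auto-961864 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"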

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-h9zt2" [f1e85820-c803-4e23-beac-0281ee8987d6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004184563s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (74.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-961864 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-961864 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m14.005631465s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (74.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-961864 "pgrep -a kubelet"
I1026 15:14:21.915184  141233 config.go:182] Loaded profile config "kindnet-961864": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-961864 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-psbkj" [a0fc17b7-393f-4929-9e8c-2391c7c47af4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-psbkj" [a0fc17b7-393f-4929-9e8c-2391c7c47af4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.005389971s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-961864 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-961864 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-961864 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)
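
Localhost and HairPin differ only in the target of the nc probe: the first command below checks that the pod can reach a listener on its own localhost, while the second checks that it can reach itself back through what is presumably its own netcat Service name (hairpin traffic). Both commands are copied from the tests above:

    # localhost reachability, then hairpin via the service name
    kubectl --context kindnet-961864 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context kindnet-961864 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"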

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-7ldcw" [22728a63-a1e7-4504-9c4a-3d61ffc7e201] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004772603s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (89.78s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-961864 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-961864 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m29.776238083s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (89.78s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-961864 "pgrep -a kubelet"
I1026 15:14:48.907283  141233 config.go:182] Loaded profile config "calico-961864": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.26s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-961864 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-cfvph" [9a7afe71-7719-41bf-b0b3-7936f7f57f20] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-cfvph" [9a7afe71-7719-41bf-b0b3-7936f7f57f20] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.003586397s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.26s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-961864 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-961864 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-961864 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (71.39s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-961864 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-961864 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m11.386480266s)
--- PASS: TestNetworkPlugins/group/flannel/Start (71.39s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-961864 "pgrep -a kubelet"
I1026 15:15:34.992895  141233 config.go:182] Loaded profile config "custom-flannel-961864": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-961864 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-d2n69" [ee4d03b8-dfd6-4753-8a5d-bf40e3d103a0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-d2n69" [ee4d03b8-dfd6-4753-8a5d-bf40e3d103a0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004032814s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.29s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-961864 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-961864 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-961864 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (83.64s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-961864 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-961864 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m23.641346239s)
--- PASS: TestNetworkPlugins/group/bridge/Start (83.64s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-961864 "pgrep -a kubelet"
I1026 15:16:18.362663  141233 config.go:182] Loaded profile config "enable-default-cni-961864": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-961864 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-b4pjq" [f2a0a591-f207-44e2-b4cf-17477a94d664] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-b4pjq" [f2a0a591-f207-44e2-b4cf-17477a94d664] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.003761681s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.22s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-961864 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-961864 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-961864 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-zc66g" [89169b79-aea8-4fa8-9b33-b5380f7b0471] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005858775s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-961864 "pgrep -a kubelet"
I1026 15:16:36.135114  141233 config.go:182] Loaded profile config "flannel-961864": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.27s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-961864 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-9pjkz" [2f9b3d9f-0901-4451-84aa-ffa4ce826ebc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1026 15:16:40.876153  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/functional-946873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-9pjkz" [2f9b3d9f-0901-4451-84aa-ffa4ce826ebc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004697935s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.27s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (95.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-065983 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-065983 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m35.020420883s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (95.02s)
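
This profile pins an older Kubernetes release via --kubernetes-version; the remaining flags are the KVM-specific options the suite always passes. A trimmed sketch of the same start (minikube standing in for the test binary):

    # bring up a cluster on Kubernetes v1.28.0 under KVM (sketch)
    minikube start -p old-k8s-version-065983 --memory=3072 --driver=kvm2 --container-runtime=crio \
      --kubernetes-version=v1.28.0 --kvm-network=default --kvm-qemu-uri=qemu:///system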

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-961864 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-961864 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-961864 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (100.55s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-758002 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-758002 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m40.550949726s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (100.55s)
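
The no-preload profile passes --preload=false, so the cluster comes up without minikube's preloaded image tarball and has to fetch images at start time, which helps explain why this FirstStart (about 100s) is among the slower ones. A sketch of the invocation:

    # start without the preloaded image tarball; images are pulled during startup (sketch)
    minikube start -p no-preload-758002 --memory=3072 --driver=kvm2 --container-runtime=crio \
      --preload=false --kubernetes-version=v1.34.1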

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-961864 "pgrep -a kubelet"
I1026 15:17:24.600847  141233 config.go:182] Loaded profile config "bridge-961864": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.24s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-961864 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-798cq" [75b5b6f1-9e56-477c-bd9c-565d849ceca4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-798cq" [75b5b6f1-9e56-477c-bd9c-565d849ceca4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.005564121s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-961864 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-961864 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-961864 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)
E1026 15:21:50.431110  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/flannel-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (87.6s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-163393 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-163393 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m27.595259601s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (87.60s)
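
--embed-certs normally makes minikube inline the client certificate data into the generated kubeconfig entry instead of referencing certificate files on disk; otherwise this start mirrors the other v1.34.1 profiles:

    # start a profile whose kubeconfig embeds certificate data rather than file paths (sketch)
    minikube start -p embed-certs-163393 --memory=3072 --driver=kvm2 --container-runtime=crio \
      --embed-certs --kubernetes-version=v1.34.1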

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (11.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-065983 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [5c3ffe06-32fa-4408-a819-626d8027923e] Pending
helpers_test.go:352: "busybox" [5c3ffe06-32fa-4408-a819-626d8027923e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [5c3ffe06-32fa-4408-a819-626d8027923e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.005240784s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-065983 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.39s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-065983 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-065983 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.094824567s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-065983 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.19s)
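
EnableAddonWhileActive turns the metrics-server addon on against a running cluster, overriding its image and registry (fake.domain looks like an intentionally unreachable registry, used only to exercise the override flags), then inspects the resulting deployment. The manual equivalent, copied from the commands above:

    # enable the addon with image/registry overrides, then inspect the deployment (sketch)
    minikube addons enable metrics-server -p old-k8s-version-065983 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
    kubectl --context old-k8s-version-065983 describe deploy/metrics-server -n kube-system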

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (81.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-065983 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-065983 --alsologtostderr -v=3: (1m21.261909055s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (81.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (80.6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-705037 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-705037 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m20.598971707s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (80.60s)
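
default-k8s-diff-port differs from a stock profile only in exposing the API server on port 8444 instead of minikube's usual 8443, via --apiserver-port:

    # run the API server on a non-default port (sketch)
    minikube start -p default-k8s-diff-port-705037 --memory=3072 --driver=kvm2 --container-runtime=crio \
      --apiserver-port=8444 --kubernetes-version=v1.34.1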

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (11.27s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-758002 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [c2ce4841-544e-462b-b65c-ba20b1274683] Pending
helpers_test.go:352: "busybox" [c2ce4841-544e-462b-b65c-ba20b1274683] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [c2ce4841-544e-462b-b65c-ba20b1274683] Running
E1026 15:18:54.587757  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/auto-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:18:54.594148  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/auto-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:18:54.605539  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/auto-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:18:54.627009  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/auto-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.00488127s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-758002 exec busybox -- /bin/sh -c "ulimit -n"
E1026 15:18:54.668732  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/auto-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:18:54.751032  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/auto-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.91s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-758002 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1026 15:18:54.912832  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/auto-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:18:55.234308  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/auto-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-758002 describe deploy/metrics-server -n kube-system
E1026 15:18:55.617971  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/addons-061252/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.91s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (88.96s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-758002 --alsologtostderr -v=3
E1026 15:18:55.876287  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/auto-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:18:57.157731  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/auto-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:18:59.719736  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/auto-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:19:04.842061  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/auto-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:19:15.083672  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/auto-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:19:15.735674  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/kindnet-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:19:15.742129  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/kindnet-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:19:15.753514  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/kindnet-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:19:15.774911  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/kindnet-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:19:15.816334  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/kindnet-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:19:15.897813  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/kindnet-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:19:16.059419  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/kindnet-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:19:16.381143  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/kindnet-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:19:17.022631  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/kindnet-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:19:18.304255  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/kindnet-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-758002 --alsologtostderr -v=3: (1m28.960700059s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (88.96s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.32s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-163393 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [10785d26-2fbc-4a19-ad15-fcc4d97a0f26] Pending
helpers_test.go:352: "busybox" [10785d26-2fbc-4a19-ad15-fcc4d97a0f26] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1026 15:19:20.865772  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/kindnet-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [10785d26-2fbc-4a19-ad15-fcc4d97a0f26] Running
E1026 15:19:25.987708  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/kindnet-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004249538s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-163393 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.32s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.84s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-163393 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-163393 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.84s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (82.16s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-163393 --alsologtostderr -v=3
E1026 15:19:35.565177  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/auto-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:19:36.229699  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/kindnet-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:19:42.686404  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/calico-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:19:42.692788  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/calico-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:19:42.704152  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/calico-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:19:42.725558  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/calico-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:19:42.766967  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/calico-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:19:42.848432  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/calico-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:19:43.009997  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/calico-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:19:43.331723  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/calico-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:19:43.973351  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/calico-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:19:45.254875  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/calico-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:19:47.817176  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/calico-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:19:52.939241  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/calico-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-163393 --alsologtostderr -v=3: (1m22.161225835s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (82.16s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-065983 -n old-k8s-version-065983
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-065983 -n old-k8s-version-065983: exit status 7 (63.51479ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-065983 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.14s)
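
EnableAddonAfterStop checks two things: minikube status reports the stopped profile with a non-zero exit code (7 here, which the test explicitly treats as acceptable), and an addon can still be enabled while the cluster is down, to be applied on the next start. Roughly:

    # confirm the profile is stopped, then enable the dashboard addon while it is down (sketch)
    minikube status --format={{.Host}} -p old-k8s-version-065983   # prints "Stopped", exits non-zero
    minikube addons enable dashboard -p old-k8s-version-065983 --images=MetricsScraper=registry.k8s.io/echoserver:1.4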

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (37.9s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-065983 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-065983 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (37.668589297s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-065983 -n old-k8s-version-065983
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (37.90s)
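
SecondStart restarts the previously stopped profile with the same flags as FirstStart and then re-checks host status; the restart (about 38s) is much faster than the initial 95s bring-up, presumably because the VM disk and cached images already exist. Roughly:

    # restart the stopped profile with unchanged flags, then confirm the host state (sketch)
    minikube start -p old-k8s-version-065983 --memory=3072 --driver=kvm2 --container-runtime=crio \
      --kubernetes-version=v1.28.0 --kvm-network=default --kvm-qemu-uri=qemu:///system
    minikube status --format={{.Host}} -p old-k8s-version-065983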

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-705037 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [8d8ee4dc-96c2-4995-a68f-f41e5f0eaf9e] Pending
helpers_test.go:352: "busybox" [8d8ee4dc-96c2-4995-a68f-f41e5f0eaf9e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1026 15:19:56.711253  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/kindnet-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [8d8ee4dc-96c2-4995-a68f-f41e5f0eaf9e] Running
E1026 15:20:03.180833  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/calico-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.004980864s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-705037 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.96s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-705037 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-705037 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.96s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (88.85s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-705037 --alsologtostderr -v=3
E1026 15:20:16.527352  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/auto-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:20:23.662876  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/calico-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-705037 --alsologtostderr -v=3: (1m28.851104481s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (88.85s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-758002 -n no-preload-758002
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-758002 -n no-preload-758002: exit status 7 (74.450469ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-758002 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (56.92s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-758002 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-758002 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (56.650480544s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-758002 -n no-preload-758002
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (56.92s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (19.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-glllw" [c18e05ef-9d45-4f5e-b772-737f72f29203] Pending
E1026 15:20:35.254129  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/custom-flannel-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-glllw" [c18e05ef-9d45-4f5e-b772-737f72f29203] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1026 15:20:35.260693  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/custom-flannel-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:20:35.272142  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/custom-flannel-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:20:35.293853  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/custom-flannel-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:20:35.335328  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/custom-flannel-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:20:35.416882  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/custom-flannel-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:20:35.579235  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/custom-flannel-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:20:35.901595  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/custom-flannel-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:20:36.543569  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/custom-flannel-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:20:37.673612  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/kindnet-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:20:37.825483  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/custom-flannel-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:20:40.386890  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/custom-flannel-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-glllw" [c18e05ef-9d45-4f5e-b772-737f72f29203] Running
E1026 15:20:45.508705  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/custom-flannel-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 19.003650242s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (19.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-glllw" [c18e05ef-9d45-4f5e-b772-737f72f29203] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005317869s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-065983 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-163393 -n embed-certs-163393
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-163393 -n embed-certs-163393: exit status 7 (66.915317ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-163393 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.15s)
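
EnableAddonAfterStop above checks `minikube status` while the host is still stopped: the command prints "Stopped" and exits with status 7, which the test explicitly tolerates ("status error: exit status 7 (may be ok)") before enabling the dashboard addon. A rough Go sketch of that tolerance, with the binary path and profile name copied from the log and not meant as the suite's actual helper:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status", "--format={{.Host}}",
		"-p", "embed-certs-163393", "-n", "embed-certs-163393")
	out, err := cmd.CombinedOutput()
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 7 {
		// A stopped host reports "Stopped" with exit status 7; treat it as
		// acceptable, mirroring the "(may be ok)" note in the report.
		fmt.Printf("host stopped (exit 7, may be ok): %s", out)
		return
	}
	if err != nil {
		fmt.Printf("status check failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("host status: %s", out)
}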

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (44.48s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-163393 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1026 15:20:55.750066  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/custom-flannel-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-163393 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (44.221984601s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-163393 -n embed-certs-163393
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (44.48s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-065983 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (2.77s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-065983 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-065983 -n old-k8s-version-065983
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-065983 -n old-k8s-version-065983: exit status 2 (239.768417ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-065983 -n old-k8s-version-065983
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-065983 -n old-k8s-version-065983: exit status 2 (249.431236ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-065983 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-065983 -n old-k8s-version-065983
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-065983 -n old-k8s-version-065983
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.77s)
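
The Pause subtest above drives a pause/unpause round trip: after `minikube pause`, the APIServer field reports "Paused" and the Kubelet field reports "Stopped", each with exit status 2, which the test again notes as "(may be ok)" before unpausing and re-checking both fields. A hedged Go sketch of the same sequence (profile name taken from the log; the exit-code handling is an assumption mirroring those notes, not the suite's code):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

const (
	minikube = "out/minikube-linux-amd64"
	profile  = "old-k8s-version-065983"
)

// statusField runs `minikube status --format={{.<field>}}` for the profile and
// returns the printed value together with the command's exit code.
func statusField(field string) (string, int) {
	cmd := exec.Command(minikube, "status", "--format={{."+field+"}}", "-p", profile, "-n", profile)
	out, err := cmd.CombinedOutput()
	code := 0
	if exitErr, ok := err.(*exec.ExitError); ok {
		code = exitErr.ExitCode()
	} else if err != nil {
		log.Fatalf("status %s: %v", field, err)
	}
	return strings.TrimSpace(string(out)), code
}

func main() {
	if err := exec.Command(minikube, "pause", "-p", profile).Run(); err != nil {
		log.Fatalf("pause: %v", err)
	}
	for _, field := range []string{"APIServer", "Kubelet"} {
		value, code := statusField(field)
		// While paused, the report shows exit status 2 here, noted as "(may be ok)".
		fmt.Printf("%s=%s (exit %d)\n", field, value, code)
	}
	if err := exec.Command(minikube, "unpause", "-p", profile).Run(); err != nil {
		log.Fatalf("unpause: %v", err)
	}
}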

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (54.92s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-574718 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1026 15:21:04.624753  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/calico-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:21:16.232027  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/custom-flannel-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:21:18.564078  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/enable-default-cni-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:21:18.570611  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/enable-default-cni-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:21:18.582055  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/enable-default-cni-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:21:18.603534  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/enable-default-cni-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:21:18.645132  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/enable-default-cni-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:21:18.726749  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/enable-default-cni-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:21:18.888200  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/enable-default-cni-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:21:19.209982  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/enable-default-cni-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:21:19.852176  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/enable-default-cni-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:21:21.133667  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/enable-default-cni-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-574718 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (54.918494645s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (54.92s)
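
FirstStart above is the longest single step in this group (the start command alone accounts for 54.9s of the 54.92s total). When reproducing such a start outside the harness it helps to bound it with a context deadline; the sketch below assumes a 5-minute ceiling, which is an illustrative value and not the timeout the suite itself uses:

package main

import (
	"context"
	"log"
	"os"
	"os/exec"
	"time"
)

func main() {
	// Assumed 5-minute ceiling; the report only shows this run took ~55s.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()

	cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64", "start",
		"-p", "newest-cni-574718",
		"--memory=3072", "--alsologtostderr",
		"--wait=apiserver,system_pods,default_sa",
		"--network-plugin=cni",
		"--extra-config=kubeadm.pod-network-cidr=10.42.0.0/16",
		"--driver=kvm2", "--container-runtime=crio",
		"--kubernetes-version=v1.34.1")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr

	started := time.Now()
	if err := cmd.Run(); err != nil {
		log.Fatalf("start failed after %s: %v", time.Since(started), err)
	}
	log.Printf("start completed in %s", time.Since(started))
}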

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2jfb8" [6f2b8bd1-1759-4b3f-8f91-2fe73eef977e] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1026 15:21:23.695065  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/enable-default-cni-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:21:23.947867  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/functional-946873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2jfb8" [6f2b8bd1-1759-4b3f-8f91-2fe73eef977e] Running
E1026 15:21:28.817267  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/enable-default-cni-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:21:29.934337  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/flannel-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:21:29.941038  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/flannel-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:21:29.952514  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/flannel-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:21:29.974766  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/flannel-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:21:30.016838  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/flannel-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:21:30.098386  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/flannel-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:21:30.260187  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/flannel-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:21:30.581520  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/flannel-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:21:31.223723  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/flannel-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:21:32.505516  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/flannel-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.003955568s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2jfb8" [6f2b8bd1-1759-4b3f-8f91-2fe73eef977e] Running
E1026 15:21:35.066898  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/flannel-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003439344s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-758002 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-705037 -n default-k8s-diff-port-705037
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-705037 -n default-k8s-diff-port-705037: exit status 7 (63.451833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-705037 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.15s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (44.68s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-705037 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-705037 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (44.397009215s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-705037 -n default-k8s-diff-port-705037
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (44.68s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-758002 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (2.74s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-758002 --alsologtostderr -v=1
E1026 15:21:39.059516  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/enable-default-cni-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-758002 -n no-preload-758002
E1026 15:21:40.189063  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/flannel-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-758002 -n no-preload-758002: exit status 2 (239.634819ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-758002 -n no-preload-758002
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-758002 -n no-preload-758002: exit status 2 (241.343137ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-758002 --alsologtostderr -v=1
E1026 15:21:40.875600  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/functional-946873/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-758002 -n no-preload-758002
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-758002 -n no-preload-758002
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.74s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.29s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-574718 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-574718 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.28519422s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.29s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (10.95s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-574718 --alsologtostderr -v=3
E1026 15:21:57.193390  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/custom-flannel-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:21:59.541516  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/enable-default-cni-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 15:21:59.595484  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/kindnet-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-574718 --alsologtostderr -v=3: (10.948336258s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.95s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-574718 -n newest-cni-574718
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-574718 -n newest-cni-574718: exit status 7 (70.106504ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-574718 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.15s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (33.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-574718 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1026 15:22:10.912807  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/flannel-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-574718 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (32.781041125s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-574718 -n newest-cni-574718
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (33.07s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-574718 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)
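
VerifyKubernetesImages runs `image list --format=json` and reports any image that is not part of minikube's expected set; here kindest/kindnetd:v20250512-df8de77b is flagged as non-minikube (elsewhere in this report gcr.io/k8s-minikube/busybox is flagged the same way) and the subtest still passes. The sketch below only illustrates the flagging idea over a plain list of image names; the registry prefixes are guesses, and the actual allow-list and the JSON schema of `image list` live in the test suite and are not modelled here:

package main

import (
	"fmt"
	"strings"
)

// flagNonMinikube returns the images whose names do not start with one of the
// assumed "expected" prefixes. The prefix list is illustrative only.
func flagNonMinikube(images []string) []string {
	expected := []string{
		"registry.k8s.io/",
		"gcr.io/k8s-minikube/storage-provisioner",
	}
	var extra []string
	for _, img := range images {
		known := false
		for _, prefix := range expected {
			if strings.HasPrefix(img, prefix) {
				known = true
				break
			}
		}
		if !known {
			extra = append(extra, img)
		}
	}
	return extra
}

func main() {
	// Example input mirroring this report: kindnetd and busybox get flagged.
	images := []string{
		"registry.k8s.io/kube-apiserver:v1.34.1",
		"kindest/kindnetd:v20250512-df8de77b",
		"gcr.io/k8s-minikube/busybox:1.28.4-glibc",
	}
	for _, img := range flagNonMinikube(images) {
		fmt.Println("Found non-minikube image:", img)
	}
}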

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.81s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-574718 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-574718 --alsologtostderr -v=1: (1.799605956s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-574718 -n newest-cni-574718
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-574718 -n newest-cni-574718: exit status 2 (258.377905ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-574718 -n newest-cni-574718
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-574718 -n newest-cni-574718: exit status 2 (277.872164ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-574718 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-574718 -n newest-cni-574718
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-574718 -n newest-cni-574718
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.81s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-163393 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.20s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (2.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-163393 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-163393 -n embed-certs-163393
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-163393 -n embed-certs-163393: exit status 2 (200.993793ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-163393 -n embed-certs-163393
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-163393 -n embed-certs-163393: exit status 2 (202.695406ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-163393 --alsologtostderr -v=1
E1026 15:39:42.686549  141233 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/calico-961864/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-163393 -n embed-certs-163393
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-163393 -n embed-certs-163393
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.30s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-705037 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.20s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.4s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-705037 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-705037 -n default-k8s-diff-port-705037
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-705037 -n default-k8s-diff-port-705037: exit status 2 (212.32252ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-705037 -n default-k8s-diff-port-705037
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-705037 -n default-k8s-diff-port-705037: exit status 2 (209.479876ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-705037 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-705037 -n default-k8s-diff-port-705037
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-705037 -n default-k8s-diff-port-705037
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.40s)

                                                
                                    

Test skip (40/323)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.1/cached-images 0
15 TestDownloadOnly/v1.34.1/binaries 0
16 TestDownloadOnly/v1.34.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.35
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
127 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
128 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
129 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
130 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestFunctionalNewestKubernetes 0
157 TestGvisorAddon 0
179 TestImageBuild 0
206 TestKicCustomNetwork 0
207 TestKicExistingNetwork 0
208 TestKicCustomSubnet 0
209 TestKicStaticIP 0
241 TestChangeNoneUser 0
244 TestScheduledStopWindows 0
246 TestSkaffold 0
248 TestInsufficientStorage 0
252 TestMissingContainerUpgrade 0
260 TestNetworkPlugins/group/kubenet 3.43
268 TestNetworkPlugins/group/cilium 4.29
278 TestStartStop/group/disable-driver-mounts 0.18
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)
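
The download-only skips in this block all follow the same shape: a precondition (an existing preload) makes the subtest redundant, so it skips with the reason that then shows up in this report. A hypothetical helper illustrating that pattern, not the suite's actual code:

package downloadonly // hypothetical package, for illustration only

import "testing"

// skipIfPreloadExists skips the calling subtest when a preload tarball already
// supplies the cached images, using the same message seen in this report.
func skipIfPreloadExists(t *testing.T, preloadExists bool) {
	t.Helper()
	if preloadExists {
		t.Skip("Preload exists, images won't be cached")
	}
}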

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.35s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-061252 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.35s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-961864 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-961864

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-961864

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-961864

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-961864

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-961864

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-961864

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-961864

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-961864

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-961864

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-961864

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-961864"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-961864"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-961864"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-961864

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-961864"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-961864"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-961864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-961864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-961864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-961864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-961864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-961864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-961864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-961864" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-961864"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-961864"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-961864"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-961864"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-961864"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-961864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-961864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-961864" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-961864"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-961864"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-961864"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-961864"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-961864"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21664-137233/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 26 Oct 2025 15:09:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.108:8443
  name: cert-expiration-553579
contexts:
- context:
    cluster: cert-expiration-553579
    extensions:
    - extension:
        last-update: Sun, 26 Oct 2025 15:09:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-553579
  name: cert-expiration-553579
current-context: ""
kind: Config
users:
- name: cert-expiration-553579
  user:
    client-certificate: /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/cert-expiration-553579/client.crt
    client-key: /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/cert-expiration-553579/client.key
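
The kubeconfig above contains only the leftover cert-expiration-553579 cluster and leaves current-context empty, which is why every kubectl call in this dump reports that the kubenet-961864 context cannot be found. The short client-go sketch below is illustrative only, not code from this suite; the default loading rules and the printed messages are assumptions, shown to reproduce that failing lookup:

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig from the default locations ($KUBECONFIG or ~/.kube/config).
	cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
	if err != nil {
		panic(err)
	}
	fmt.Printf("current-context: %q\n", cfg.CurrentContext)
	if _, ok := cfg.Contexts["kubenet-961864"]; !ok {
		// Mirrors the "context was not found" / "does not exist" errors seen above.
		fmt.Println(`context "kubenet-961864" is not defined in this kubeconfig`)
	}
}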

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-961864

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-961864"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-961864"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-961864"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-961864"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-961864"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-961864"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-961864"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-961864"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-961864"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-961864"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-961864"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-961864"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-961864"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-961864"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-961864"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-961864"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-961864"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-961864"

                                                
                                                
----------------------- debugLogs end: kubenet-961864 [took: 3.260404461s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-961864" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-961864
--- SKIP: TestNetworkPlugins/group/kubenet (3.43s)
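
Entries like this one all follow the same gating pattern: the test inspects the configured driver or container runtime up front and calls t.Skip when it does not match, which is what produces the SKIP lines throughout this report. A minimal, self-contained sketch of that pattern follows; ContainerRuntime and the environment variable it reads are hypothetical stand-ins, not the suite's actual helpers.

package integration

import (
	"os"
	"testing"
)

// ContainerRuntime is a hypothetical helper standing in for the suite's real
// runtime detection; here it simply reads an environment variable.
func ContainerRuntime() string {
	if rt := os.Getenv("CONTAINER_RUNTIME"); rt != "" {
		return rt
	}
	return "docker"
}

// TestRequiresDockerRuntime illustrates the gating pattern behind the SKIP
// entries: bail out early when the runtime does not match what the test needs.
func TestRequiresDockerRuntime(t *testing.T) {
	if ContainerRuntime() == "crio" {
		t.Skip("test requires the docker runtime; currently testing crio")
	}
	// ...real assertions would follow here...
}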

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-961864 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-961864

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-961864

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-961864

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-961864

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-961864

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-961864

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-961864

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-961864

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-961864

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-961864

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-961864"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-961864"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-961864"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-961864

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-961864"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-961864"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-961864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-961864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-961864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-961864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-961864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-961864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-961864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-961864" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-961864"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-961864"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-961864"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-961864"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-961864"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-961864

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-961864

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-961864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-961864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-961864

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-961864

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-961864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-961864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-961864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-961864" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-961864" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-961864"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-961864"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-961864"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-961864"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-961864"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21664-137233/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 26 Oct 2025 15:09:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.50.108:8443
  name: cert-expiration-553579
contexts:
- context:
    cluster: cert-expiration-553579
    extensions:
    - extension:
        last-update: Sun, 26 Oct 2025 15:09:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-553579
  name: cert-expiration-553579
current-context: ""
kind: Config
users:
- name: cert-expiration-553579
  user:
    client-certificate: /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/cert-expiration-553579/client.crt
    client-key: /home/jenkins/minikube-integration/21664-137233/.minikube/profiles/cert-expiration-553579/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-961864

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-961864"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-961864"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-961864"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-961864"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-961864"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-961864"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-961864"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-961864"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-961864"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-961864"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-961864"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-961864"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-961864"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-961864"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-961864"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-961864"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-961864"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-961864" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-961864"

                                                
                                                
----------------------- debugLogs end: cilium-961864 [took: 4.12259262s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-961864" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-961864
--- SKIP: TestNetworkPlugins/group/cilium (4.29s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-399877" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-399877
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                    